Red Hat Bugzilla – Attachment 1489496 Details for Bug 1635314: [ERROR]: The python-notario library is missing. Please install it on the node
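The error in the summary indicates that the `notario` Python library, which ceph-ansible uses for validating its configuration, is not installed on the node running the playbooks. A minimal remediation sketch as an Ansible task (hypothetical, not part of the attached log; assumes a yum-based host and takes the package name `python-notario` from the summary):

```yaml
# Hypothetical remediation playbook -- not part of the attached log.
# Installs the python-notario library named in the bug summary so that
# ceph-ansible validation can import it, then the deployment can be re-run.
- name: Ensure python-notario is present
  hosts: undercloud        # assumption: the node named in the error message
  become: true
  tasks:
    - name: Install python-notario
      yum:
        name: python-notario
        state: present
```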
/var/lib/mistral/overcloud/ansible.log

Description: /var/lib/mistral/overcloud/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-10-02 15:23:49 UTC
Size: 8.70 MB
2018-10-02 08:28:46,363 p=1004 u=mistral | Using /var/lib/mistral/overcloud/ansible.cfg as config file
2018-10-02 08:28:47,330 p=1004 u=mistral | PLAY [Gather facts from undercloud] ********************************************
2018-10-02 08:28:47,342 p=1004 u=mistral | TASK [Gathering Facts] *********************************************************
2018-10-02 08:28:47,343 p=1004 u=mistral | Tuesday 02 October 2018 08:28:47 -0400 (0:00:00.076) 0:00:00.076 *******
2018-10-02 08:29:00,676 p=1004 u=mistral | ok: [undercloud]
2018-10-02 08:29:00,694 p=1004 u=mistral | PLAY [Gather facts from overcloud] *********************************************
2018-10-02 08:29:00,712 p=1004 u=mistral | TASK [Gathering Facts] *********************************************************
2018-10-02 08:29:00,712 p=1004 u=mistral | Tuesday 02 October 2018 08:29:00 -0400 (0:00:13.369) 0:00:13.446 *******
2018-10-02 08:29:04,722 p=1004 u=mistral | ok: [compute-0]
2018-10-02 08:29:04,831 p=1004 u=mistral | ok: [controller-0]
2018-10-02 08:29:04,874 p=1004 u=mistral | ok: [ceph-0]
2018-10-02 08:29:04,901 p=1004 u=mistral | PLAY [Load global variables] ***************************************************
2018-10-02 08:29:04,924 p=1004 u=mistral | TASK [include_vars] ************************************************************
2018-10-02 08:29:04,924 p=1004 u=mistral | Tuesday 02 October 2018 08:29:04 -0400 (0:00:04.211) 0:00:17.657 *******
2018-10-02 08:29:05,000 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.26]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.26]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.17]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.8]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.8]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.8]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.8]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.8]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.10]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.11]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.12]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.10]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.15]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.12]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.12]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.12]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.20]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.15]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.31]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.20]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.19]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.104]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.10]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.10]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
2018-10-02 08:29:05,018 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.26]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.26]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.17]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.8]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.8]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.8]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.8]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.8]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.10]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.11]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.12]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.10]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.15]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.12]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.12]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.12]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.20]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.15]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.31]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.20]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.19]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.104]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.10]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.10]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
2018-10-02 08:29:05,044 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.26]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.26]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.17]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.8]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.8]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.8]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.8]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.8]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.10]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.11]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.12]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.10]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.15]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.12]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.12]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.12]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.20]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.15]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.31]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.20]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.19]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.104]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.10]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.10]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
2018-10-02 08:29:05,084 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.26]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.26]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.17]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.8]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.8]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.8]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.8]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.8]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.10]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.11]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.12]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.10]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.15]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.12]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.12]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.12]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.20]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.15]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.31]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.20]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.19]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.104]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.10]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.10]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
2018-10-02 08:29:05,092 p=1004 u=mistral | PLAY [Common roles for TripleO servers] ****************************************
2018-10-02 08:29:05,121 p=1004 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
2018-10-02 08:29:05,121 p=1004 u=mistral | Tuesday 02 October 2018 08:29:05 -0400 (0:00:00.197) 0:00:17.855 *******
2018-10-02 08:29:05,960 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-10-02 08:29:05,963 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-10-02 08:29:05,969 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-10-02 08:29:05,993 p=1004 u=mistral | TASK [tripleo-bootstrap : Check required packages are installed] ***************
2018-10-02 08:29:05,993 p=1004 u=mistral | Tuesday 02 October 2018 08:29:05 -0400 (0:00:00.872) 0:00:18.727 *******
2018-10-02 08:29:06,396 p=1004 u=mistral | changed: [ceph-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.038431", "end": "2018-10-02 08:29:06.373593", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 08:29:06.335162", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,400 p=1004 u=mistral | changed: [compute-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.037084", "end": "2018-10-02 08:29:06.377631", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 08:29:06.340547", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,405 p=1004 u=mistral | changed: [controller-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.034042", "end": "2018-10-02 08:29:06.376573", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 08:29:06.342531", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,593 p=1004 u=mistral | changed: [ceph-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.034350", "end": "2018-10-02 08:29:06.573229", "item": "jq", "rc": 0, "start": "2018-10-02 08:29:06.538879", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,596 p=1004 u=mistral | changed: [compute-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.036463", "end": "2018-10-02 08:29:06.574774", "item": "jq", "rc": 0, "start": "2018-10-02 08:29:06.538311", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,598 p=1004 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message.
2018-10-02 08:29:06,601 p=1004 u=mistral | changed: [controller-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.036997", "end": "2018-10-02 08:29:06.579007", "item": "jq", "rc": 0, "start": "2018-10-02 08:29:06.542010", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]}
2018-10-02 08:29:06,626 p=1004 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
2018-10-02 08:29:06,626 p=1004 u=mistral | Tuesday 02 October 2018 08:29:06 -0400 (0:00:00.632) 0:00:19.359 *******
2018-10-02 08:29:07,008 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-10-02 08:29:07,010 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-10-02 08:29:07,011 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-10-02 08:29:07,033 p=1004 u=mistral | TASK [tripleo-ssh-known-hosts : Add hosts key in /etc/ssh/ssh_known_hosts for live/cold-migration] ***
2018-10-02 08:29:07,033 p=1004 u=mistral | Tuesday 02 October 2018 08:29:07 -0400 (0:00:00.407) 0:00:19.767 *******
2018-10-02 08:29:07,406 p=1004 u=mistral | changed: [controller-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
2018-10-02 08:29:07,423 p=1004 u=mistral | changed: [compute-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
2018-10-02 08:29:07,428 p=1004 u=mistral | changed: [ceph-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
2018-10-02 08:29:07,608 p=1004 u=mistral | changed: [controller-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
2018-10-02 08:29:07,626 p=1004 u=mistral | changed: [compute-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
2018-10-02 08:29:07,652 p=1004 u=mistral | changed: [ceph-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
2018-10-02 08:29:07,817 p=1004 u=mistral | changed: [controller-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
2018-10-02 08:29:07,845 p=1004 u=mistral | changed: [compute-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
2018-10-02 08:29:07,883 p=1004 u=mistral | changed: [ceph-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
2018-10-02 08:29:07,893 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for step 0] **********************************
2018-10-02 08:29:07,899 p=1004 u=mistral | PLAY [Server deployments] ******************************************************
2018-10-02 08:29:07,925 p=1004 u=mistral | TASK [include_tasks] ***********************************************************
2018-10-02 08:29:07,925 p=1004 u=mistral | Tuesday 02 October 2018 08:29:07 -0400 (0:00:00.891) 0:00:20.658 *******
2018-10-02 08:29:08,547 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, ceph-0, compute-0
2018-10-02 08:29:08,571 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,594 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, ceph-0, compute-0
2018-10-02 08:29:08,617 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,640 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,665 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,688 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,712 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,736 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
2018-10-02 08:29:08,759 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,784 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,807 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,831 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,854 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,878 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,903 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
2018-10-02 08:29:08,927 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:08,950 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:08,974 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:08,998 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:09,020 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:09,044 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:09,068 p=1004 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
2018-10-02 08:29:09,100 p=1004 u=mistral | TASK [Lookup deployment UUID] **************************************************
2018-10-02 08:29:09,101 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:01.175) 0:00:21.834 *******
2018-10-02 08:29:09,181 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "dcedf467-41a0-4cc2-b500-aa5a4377918a"}, "changed": false}
2018-10-02 08:29:09,206 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "4573d575-a235-4a9b-84b2-f082c2b86c1c"}, "changed": false}
2018-10-02 08:29:09,216 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c216e59d-d4de-46df-8925-341ebb100a51"}, "changed": false}
2018-10-02 08:29:09,242 p=1004 u=mistral | TASK [Lookup deployment group] *************************************************
2018-10-02 08:29:09,242 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.141) 0:00:21.976 *******
2018-10-02 08:29:09,310 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false}
2018-10-02 08:29:09,337 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false}
2018-10-02 08:29:09,366 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false}
2018-10-02 08:29:09,394 p=1004 u=mistral | TASK [Create hiera check-mode directory] ***************************************
2018-10-02 08:29:09,395 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.152) 0:00:22.128 *******
2018-10-02 08:29:09,429 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,458 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,472 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,499 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************
2018-10-02 08:29:09,499 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.104) 0:00:22.233 *******
2018-10-02 08:29:09,530 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,558 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,570 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,596 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] *********************
2018-10-02 08:29:09,597 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.097) 0:00:22.330 *******
2018-10-02 08:29:09,626 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,651 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,670 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,700 p=1004 u=mistral | TASK [Render deployment file for NetworkDeployment for check-mode] *************
2018-10-02 08:29:09,701 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.103) 0:00:22.434 *******
2018-10-02 08:29:09,730 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,756 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,770 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,796 p=1004 u=mistral | TASK [Run hiera deployment for check mode] *************************************
2018-10-02 08:29:09,796 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.095) 0:00:22.530 *******
2018-10-02 08:29:09,826 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,852 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,865 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,890 p=1004 u=mistral | TASK [List hieradata files for check mode] *************************************
2018-10-02 08:29:09,890 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.093) 0:00:22.624 *******
2018-10-02 08:29:09,921 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,947 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,960 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:09,984 p=1004 u=mistral | TASK [diff hieradata changes for check mode] ***********************************
2018-10-02 08:29:09,984 p=1004 u=mistral | Tuesday 02 October 2018 08:29:09 -0400 (0:00:00.093) 0:00:22.718 *******
2018-10-02 08:29:10,015 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,098 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,113 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,137 p=1004 u=mistral | TASK [diff hieradata changes for check mode] ***********************************
2018-10-02 08:29:10,137 p=1004 u=mistral | Tuesday 02 October 2018 08:29:10 -0400 (0:00:00.153) 0:00:22.871 *******
2018-10-02 08:29:10,167 p=1004 u=mistral | skipping: [controller-0] => {}
2018-10-02 08:29:10,192 p=1004 u=mistral | skipping: [ceph-0] => {}
2018-10-02 08:29:10,207 p=1004 u=mistral | skipping: [compute-0] => {}
2018-10-02 08:29:10,231 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] ***************************************
2018-10-02 08:29:10,231 p=1004 u=mistral | Tuesday 02 October 2018 08:29:10 -0400 (0:00:00.094) 0:00:22.965 *******
2018-10-02 08:29:10,260 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,284 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,296 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:10,320 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] **********************************
2018-10-02 08:29:10,320 p=1004 u=mistral | Tuesday 02 October 2018 08:29:10 -0400 (0:00:00.088) 0:00:23.054 *******
2018-10-02 08:29:10,349 p=1004 u=mistral | skipping: [controller-0] => {}
2018-10-02 08:29:10,374 p=1004 u=mistral | skipping: [ceph-0] => {}
2018-10-02 08:29:10,391 p=1004 u=mistral | skipping: [compute-0] => {}
2018-10-02 08:29:10,418 p=1004 u=mistral | TASK [Render deployment file for NetworkDeployment] ****************************
2018-10-02 08:29:10,418 p=1004 u=mistral | Tuesday 02 October 2018 08:29:10 -0400 (0:00:00.097) 0:00:23.152 *******
2018-10-02 08:29:11,206 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "5de4d0c343b37dc2594f83e7f934e2c1f2cf5316", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-dcedf467-41a0-4cc2-b500-aa5a4377918a", "gid": 0, "group": "root", "md5sum": "e7339c3e4eabd4801ba7f739e064d592", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483350.48-74951604534847/source", "state": "file", "uid": 0}
2018-10-02 08:29:11,207 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "993f51ab25bcae7a2cd2e71eb289e0338fb4fc91", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-4573d575-a235-4a9b-84b2-f082c2b86c1c", "gid": 0, "group": "root", "md5sum": "a2e9692f1977c02497cd8be4d1b699c9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8774, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483350.51-162714001170788/source", "state": "file", "uid": 0}
2018-10-02 08:29:11,210 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "3c4fb6625396781fc9cbbdd9e6afabdcfaacbfbe", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-c216e59d-d4de-46df-8925-341ebb100a51", "gid": 0, "group": "root", "md5sum": "d2f5a8d57c37d8f91bb0bac3b3511952", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483350.53-70753810685948/source", "state": "file", "uid": 0}
2018-10-02 08:29:11,235 p=1004 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] *********************
2018-10-02 08:29:11,235 p=1004 u=mistral | Tuesday 02 October 2018 08:29:11 -0400 (0:00:00.817) 0:00:23.969 *******
2018-10-02 08:29:11,438 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
2018-10-02 08:29:11,456 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}
2018-10-02 08:29:11,490 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
2018-10-02 08:29:11,518 p=1004 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] **********************
2018-10-02 08:29:11,518 p=1004 u=mistral | Tuesday 02 October 2018 08:29:11 -0400 (0:00:00.282) 0:00:24.251 *******
2018-10-02 08:29:11,549 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,577 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,589 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,617 p=1004 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
2018-10-02 08:29:11,617 p=1004 u=mistral | Tuesday 02 October 2018 08:29:11 -0400 (0:00:00.099) 0:00:24.350 *******
2018-10-02 08:29:11,648 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,675 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,689 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,716 p=1004 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************
2018-10-02 08:29:11,717 p=1004 u=mistral | Tuesday 02 October 2018 08:29:11 -0400 (0:00:00.099) 0:00:24.450 *******
2018-10-02 08:29:11,748 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,774 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,793 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-10-02 08:29:11,822 p=1004 u=mistral | TASK [Run deployment NetworkDeployment] ****************************************
2018-10-02 08:29:11,823 p=1004 u=mistral | Tuesday 02 October 2018 08:29:11 -0400 (0:00:00.106) 0:00:24.556 *******
2018-10-02 08:29:27,533 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.notify.json)", "delta": "0:00:15.471487", "end": "2018-10-02 08:29:27.502323", "rc": 0, "start": "2018-10-02 08:29:12.030836", "stderr": "[2018-10-02 08:29:12,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json\n[2018-10-02 08:29:27,067] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active
nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:27,067] (heat-config) [DEBUG] [2018-10-02 08:29:12,080] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NetworkDeployment-5facrk3a3uwg-TripleOSoftwareDeployment-yikex5dxnnvh/768a0d72-8ef8-4651-8bbe-d4c60d20e2d7\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 
08:29:12,081] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c\n[2018-10-02 08:29:27,063] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 08:29:27,063] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c 
/etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0\n[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key 
os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 08:29:27,063] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c\n\n[2018-10-02 08:29:27,067] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:27,068] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.notify.json\n[2018-10-02 08:29:27,495] (heat-config) [INFO] \n[2018-10-02 08:29:27,496] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:12,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json", "[2018-10-02 08:29:27,067] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 
is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:27,067] (heat-config) [DEBUG] [2018-10-02 08:29:12,080] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NetworkDeployment-5facrk3a3uwg-TripleOSoftwareDeployment-yikex5dxnnvh/768a0d72-8ef8-4651-8bbe-d4c60d20e2d7", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:12,081] (heat-config) [INFO] 
deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:12,081] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c", "[2018-10-02 08:29:27,063] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 08:29:27,063] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ 
/etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1", "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40", "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth1", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 08:29:27,063] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c", "", "[2018-10-02 08:29:27,067] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:27,068] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.notify.json", "[2018-10-02 08:29:27,495] (heat-config) [INFO] ", "[2018-10-02 08:29:27,496] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:32,373 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.notify.json)", "delta": "0:00:20.276977", "end": "2018-10-02 08:29:32.343160", "rc": 0, "start": "2018-10-02 08:29:12.066183", "stderr": "[2018-10-02 08:29:12,095] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json\n[2018-10-02 08:29:31,868] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": 
[\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", 
\\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] 
[INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:31,868] (heat-config) [DEBUG] [2018-10-02 08:29:12,120] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 08:29:12,120] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NetworkDeployment-wxqapq2sz5vt-TripleOSoftwareDeployment-jh64jyl4m6s4/fba4136c-14b9-4a96-a9bd-a118119d1484\n[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:12,120] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51\n[2018-10-02 08:29:31,863] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 08:29:31,863] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": 
\"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\n[2018/10/02 
08:29:13 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 08:29:13 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50\n[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url 
--key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 08:29:31,863] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51\n\n[2018-10-02 08:29:31,868] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:31,869] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.notify.json\n[2018-10-02 08:29:32,335] (heat-config) [INFO] \n[2018-10-02 08:29:32,336] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:12,095] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json", "[2018-10-02 08:29:31,868] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": 
\\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": 
\\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on 
interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:31,868] (heat-config) [DEBUG] [2018-10-02 08:29:12,120] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 08:29:12,120] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NetworkDeployment-wxqapq2sz5vt-TripleOSoftwareDeployment-jh64jyl4m6s4/fba4136c-14b9-4a96-a9bd-a118119d1484", "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:12,120] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51", "[2018-10-02 08:29:31,863] 
(heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 08:29:31,863] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", 
"+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1", "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2", "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2", "[2018/10/02 08:29:13 AM] [INFO] running 
ifdown on interface: eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", 
"[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2", "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50", "[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url 
os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 08:29:31,863] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51", "", "[2018-10-02 08:29:31,868] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:31,869] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.notify.json", "[2018-10-02 08:29:32,335] (heat-config) [INFO] ", "[2018-10-02 08:29:32,336] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:41,308 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.notify.json)", "delta": "0:00:29.255047", "end": "2018-10-02 08:29:41.279059", "rc": 0, "start": "2018-10-02 08:29:12.024012", 
"stderr": "[2018-10-02 08:29:12,054] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json\n[2018-10-02 08:29:40,806] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an 
active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] 
running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ 
METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:40,806] (heat-config) [DEBUG] [2018-10-02 08:29:12,081] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:12,081] (heat-config) [INFO] 
deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NetworkDeployment-mj7fxli7xvf2-TripleOSoftwareDeployment-hiu4ksuunz2j/1076a213-27f5-4274-8224-513c471da110\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:12,082] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a\n[2018-10-02 08:29:40,801] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 08:29:40,802] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], 
\"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/10/02 08:29:13 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth0\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 
's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 08:29:40,802] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a\n\n[2018-10-02 08:29:40,806] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:40,807] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.notify.json\n[2018-10-02 08:29:41,272] (heat-config) [INFO] \n[2018-10-02 08:29:41,272] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:12,054] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json", "[2018-10-02 08:29:40,806] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 
192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ 
os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:40,806] (heat-config) [DEBUG] [2018-10-02 08:29:12,081] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NetworkDeployment-mj7fxli7xvf2-TripleOSoftwareDeployment-hiu4ksuunz2j/1076a213-27f5-4274-8224-513c471da110", "[2018-10-02 08:29:12,081] (heat-config) [INFO] 
deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:12,082] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a", "[2018-10-02 08:29:40,801] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 08:29:40,802] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": 
\"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: 
eth1", "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40", "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50", "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex", "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex", "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2", "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/10/02 
08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/10/02 08:29:13 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex", "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2", "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50", "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url 
os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 08:29:40,802] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a", "", "[2018-10-02 08:29:40,806] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:40,807] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.notify.json", "[2018-10-02 08:29:41,272] (heat-config) [INFO] ", "[2018-10-02 08:29:41,272] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:41,340 p=1004 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-10-02 08:29:41,340 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:29.517) 0:00:54.074 ******* >2018-10-02 08:29:41,407 p=1004 u=mistral | ok: [controller-0] => { > 
"msg": [ > { > "stderr": [ > "[2018-10-02 08:29:12,054] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json", > "[2018-10-02 08:29:40,806] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.31/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 
AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: 
vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url 
--key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:40,806] (heat-config) [DEBUG] [2018-10-02 08:29:12,081] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NetworkDeployment-mj7fxli7xvf2-TripleOSoftwareDeployment-hiu4ksuunz2j/1076a213-27f5-4274-8224-513c471da110", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:12,082] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a", > "[2018-10-02 08:29:40,801] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 08:29:40,802] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.31/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": 
\"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", > "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", > "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50", > "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-ex", > "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: br-ex", > "[2018/10/02 
08:29:12 AM] [INFO] adding interface: eth2", > "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-ex", > "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth2", > "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 08:29:18 AM] [INFO] running 
ifup on interface: eth0", > "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan50", > "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:35 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 08:29:39 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 08:29:40 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > 
"+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 08:29:40,802] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/dcedf467-41a0-4cc2-b500-aa5a4377918a", > "", > "[2018-10-02 08:29:40,806] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:40,807] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.json < /var/lib/heat-config/deployed/dcedf467-41a0-4cc2-b500-aa5a4377918a.notify.json", > "[2018-10-02 08:29:41,272] (heat-config) [INFO] ", > "[2018-10-02 08:29:41,272] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:41,429 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:12,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json", > "[2018-10-02 08:29:27,067] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", 
\\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any 
mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:27,067] (heat-config) [DEBUG] [2018-10-02 08:29:12,080] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NetworkDeployment-5facrk3a3uwg-TripleOSoftwareDeployment-yikex5dxnnvh/768a0d72-8ef8-4651-8bbe-d4c60d20e2d7", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:12,081] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:12,081] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c", > "[2018-10-02 08:29:27,063] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 08:29:27,063] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 
30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", > "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", > "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan40", > "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 08:29:12 
AM] [INFO] running ifdown on interface: eth1", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth0", > 
"[2018/10/02 08:29:17 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 
192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 08:29:27,063] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4573d575-a235-4a9b-84b2-f082c2b86c1c", > "", > "[2018-10-02 08:29:27,067] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:27,068] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.json < /var/lib/heat-config/deployed/4573d575-a235-4a9b-84b2-f082c2b86c1c.notify.json", > "[2018-10-02 08:29:27,495] (heat-config) [INFO] ", > "[2018-10-02 08:29:27,496] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:41,528 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:12,095] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json", > "[2018-10-02 08:29:31,868] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i 
s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.\\n[2018/10/02 08:29:12 AM] [INFO] Finding active nics\\n[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2\\n[2018/10/02 08:29:12 AM] [INFO] applying network configs...\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 08:29:13 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:31,868] (heat-config) [DEBUG] [2018-10-02 08:29:12,120] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NetworkDeployment-wxqapq2sz5vt-TripleOSoftwareDeployment-jh64jyl4m6s4/fba4136c-14b9-4a96-a9bd-a118119d1484", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:12,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:12,120] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51", > "[2018-10-02 08:29:31,863] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 
08:29:31,863] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ 
/etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 08:29:12 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 08:29:12 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 08:29:12 AM] [INFO] Not using any mapping file.", > "[2018/10/02 08:29:12 AM] [INFO] Finding active nics", > "[2018/10/02 08:29:12 AM] [INFO] lo is not an active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 08:29:12 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 08:29:12 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 08:29:12 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 08:29:12 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 08:29:12 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 08:29:12 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth1", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan20", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] adding vlan: vlan50", > "[2018/10/02 08:29:12 AM] [INFO] adding interface: eth2", > "[2018/10/02 08:29:12 AM] [INFO] applying network configs...", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 08:29:12 AM] [INFO] running ifdown on interface: eth2", > "[2018/10/02 08:29:13 AM] [INFO] running 
ifdown on interface: eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 08:29:13 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 08:29:13 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth2", > "[2018/10/02 08:29:13 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 08:29:14 AM] [INFO] running ifup on interface: eth0", > "[2018/10/02 08:29:18 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 08:29:22 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:26 AM] [INFO] running ifup on interface: vlan50", > "[2018/10/02 08:29:30 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 08:29:31 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 08:29:31,863] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c216e59d-d4de-46df-8925-341ebb100a51", > "", > "[2018-10-02 08:29:31,868] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:31,869] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.json < /var/lib/heat-config/deployed/c216e59d-d4de-46df-8925-341ebb100a51.notify.json", > "[2018-10-02 08:29:32,335] (heat-config) [INFO] ", > "[2018-10-02 08:29:32,336] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:41,558 p=1004 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:41,559 p=1004 u=mistral | Tuesday 
02 October 2018 08:29:41 -0400 (0:00:00.218) 0:00:54.292 ******* >2018-10-02 08:29:41,591 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:41,619 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:41,628 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:41,654 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:29:41,655 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.095) 0:00:54.388 ******* >2018-10-02 08:29:41,711 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "56179282-029a-4c31-a87f-f661e0a3fa9d"}, "changed": false} >2018-10-02 08:29:41,737 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:41,737 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.082) 0:00:54.471 ******* >2018-10-02 08:29:41,795 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:41,823 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:41,824 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.086) 0:00:54.557 ******* >2018-10-02 08:29:41,844 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:41,870 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:29:41,870 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.046) 0:00:54.604 ******* >2018-10-02 08:29:41,890 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-10-02 08:29:41,914 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:41,914 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.044) 0:00:54.648 ******* >2018-10-02 08:29:41,935 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:41,962 p=1004 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment for check-mode] *** >2018-10-02 08:29:41,962 p=1004 u=mistral | Tuesday 02 October 2018 08:29:41 -0400 (0:00:00.047) 0:00:54.695 ******* >2018-10-02 08:29:41,987 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:42,014 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:29:42,015 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.052) 0:00:54.748 ******* >2018-10-02 08:29:42,032 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:42,054 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:29:42,054 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.039) 0:00:54.788 ******* >2018-10-02 08:29:42,071 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:42,093 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:42,093 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.038) 0:00:54.826 ******* >2018-10-02 08:29:42,113 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:42,135 p=1004 u=mistral | TASK [diff hieradata changes for check 
mode] *********************************** >2018-10-02 08:29:42,135 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.042) 0:00:54.868 ******* >2018-10-02 08:29:42,155 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:42,178 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:42,178 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.042) 0:00:54.911 ******* >2018-10-02 08:29:42,197 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:42,219 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:42,219 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.041) 0:00:54.952 ******* >2018-10-02 08:29:42,236 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:42,260 p=1004 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-10-02 08:29:42,260 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.041) 0:00:54.994 ******* >2018-10-02 08:29:42,859 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "37f581260adb55c5b99827041fd1dc78ce7335f2", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-56179282-029a-4c31-a87f-f661e0a3fa9d", "gid": 0, "group": "root", "md5sum": "56c9e4cfdf9cc4299e53d4e6e6df2e40", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483382.38-93659731396227/source", "state": "file", "uid": 0} >2018-10-02 08:29:42,891 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-10-02 08:29:42,892 p=1004 u=mistral | Tuesday 02 October 2018 08:29:42 -0400 (0:00:00.631) 0:00:55.625 ******* >2018-10-02 08:29:43,176 
p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:43,206 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-10-02 08:29:43,206 p=1004 u=mistral | Tuesday 02 October 2018 08:29:43 -0400 (0:00:00.314) 0:00:55.939 ******* >2018-10-02 08:29:43,227 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:43,256 p=1004 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-10-02 08:29:43,256 p=1004 u=mistral | Tuesday 02 October 2018 08:29:43 -0400 (0:00:00.050) 0:00:55.989 ******* >2018-10-02 08:29:43,277 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:43,305 p=1004 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-10-02 08:29:43,305 p=1004 u=mistral | Tuesday 02 October 2018 08:29:43 -0400 (0:00:00.049) 0:00:56.039 ******* >2018-10-02 08:29:43,326 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:43,353 p=1004 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-10-02 08:29:43,353 p=1004 u=mistral | Tuesday 02 October 2018 08:29:43 -0400 (0:00:00.047) 0:00:56.086 ******* >2018-10-02 08:29:44,125 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.notify.json)", "delta": "0:00:00.499918", "end": "2018-10-02 08:29:44.105340", "rc": 0, "start": "2018-10-02 08:29:43.605422", "stderr": "[2018-10-02 08:29:43,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json\n[2018-10-02 08:29:43,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:43,665] (heat-config) [DEBUG] [2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-ControllerUpgradeInitDeployment-fo3vi4yxaxre/8dae7fe7-7ffa-4834-9cf1-332454db5009\n[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:43,658] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d\n[2018-10-02 08:29:43,662] (heat-config) [INFO] \n[2018-10-02 08:29:43,662] (heat-config) [DEBUG] \n[2018-10-02 08:29:43,662] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d\n\n[2018-10-02 08:29:43,665] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:43,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json < /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.notify.json\n[2018-10-02 08:29:44,098] (heat-config) [INFO] \n[2018-10-02 08:29:44,098] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:43,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json", "[2018-10-02 08:29:43,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:43,665] (heat-config) [DEBUG] [2018-10-02 08:29:43,657] (heat-config) [INFO] 
deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-ControllerUpgradeInitDeployment-fo3vi4yxaxre/8dae7fe7-7ffa-4834-9cf1-332454db5009", "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:43,658] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d", "[2018-10-02 08:29:43,662] (heat-config) [INFO] ", "[2018-10-02 08:29:43,662] (heat-config) [DEBUG] ", "[2018-10-02 08:29:43,662] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d", "", "[2018-10-02 08:29:43,665] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:43,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json < /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.notify.json", "[2018-10-02 08:29:44,098] (heat-config) [INFO] ", "[2018-10-02 08:29:44,098] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:44,157 p=1004 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-10-02 08:29:44,157 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.804) 0:00:56.891 ******* >2018-10-02 08:29:44,278 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:43,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json", > "[2018-10-02 08:29:43,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > 
"[2018-10-02 08:29:43,665] (heat-config) [DEBUG] [2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-ControllerUpgradeInitDeployment-fo3vi4yxaxre/8dae7fe7-7ffa-4834-9cf1-332454db5009", > "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:43,657] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:43,658] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d", > "[2018-10-02 08:29:43,662] (heat-config) [INFO] ", > "[2018-10-02 08:29:43,662] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:43,662] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56179282-029a-4c31-a87f-f661e0a3fa9d", > "", > "[2018-10-02 08:29:43,665] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:43,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.json < /var/lib/heat-config/deployed/56179282-029a-4c31-a87f-f661e0a3fa9d.notify.json", > "[2018-10-02 08:29:44,098] (heat-config) [INFO] ", > "[2018-10-02 08:29:44,098] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:44,354 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:44,355 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.197) 0:00:57.088 ******* >2018-10-02 08:29:44,371 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,398 p=1004 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-10-02 08:29:44,399 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.043) 0:00:57.132 ******* >2018-10-02 08:29:44,462 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "7e101e56-ed57-4f66-be4c-6aa2207a8b85"}, "changed": false} >2018-10-02 08:29:44,487 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "29ca094e-5c08-46ff-880c-2abd3db6623d"}, "changed": false} >2018-10-02 08:29:44,517 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "5259d000-2899-471b-897f-e026ec2177d5"}, "changed": false} >2018-10-02 08:29:44,544 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:44,545 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.145) 0:00:57.278 ******* >2018-10-02 08:29:44,614 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:44,637 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:44,673 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:44,699 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:44,699 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.154) 0:00:57.433 ******* >2018-10-02 08:29:44,728 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,756 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,769 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,795 p=1004 u=mistral | TASK [Create deployed 
check-mode directory] ************************************ >2018-10-02 08:29:44,795 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.095) 0:00:57.529 ******* >2018-10-02 08:29:44,826 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,853 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,867 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,892 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:44,892 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.096) 0:00:57.626 ******* >2018-10-02 08:29:44,922 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,948 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,960 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:44,987 p=1004 u=mistral | TASK [Render deployment file for CADeployment for check-mode] ****************** >2018-10-02 08:29:44,987 p=1004 u=mistral | Tuesday 02 October 2018 08:29:44 -0400 (0:00:00.095) 0:00:57.721 ******* >2018-10-02 08:29:45,016 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,040 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,058 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,086 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* 
>2018-10-02 08:29:45,086 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.098) 0:00:57.819 ******* >2018-10-02 08:29:45,115 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,142 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,154 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,180 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:29:45,180 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.093) 0:00:57.913 ******* >2018-10-02 08:29:45,210 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,237 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,250 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,275 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:45,276 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.095) 0:00:58.009 ******* >2018-10-02 08:29:45,307 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,333 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,349 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,374 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:45,374 p=1004 u=mistral | Tuesday 02 
October 2018 08:29:45 -0400 (0:00:00.098) 0:00:58.107 ******* >2018-10-02 08:29:45,406 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:45,434 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:29:45,450 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:29:45,476 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:45,477 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.102) 0:00:58.210 ******* >2018-10-02 08:29:45,506 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,534 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,548 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:45,574 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:45,574 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.097) 0:00:58.307 ******* >2018-10-02 08:29:45,603 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:45,630 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:29:45,642 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:29:45,670 p=1004 u=mistral | TASK [Render deployment file for CADeployment] ********************************* >2018-10-02 08:29:45,670 p=1004 u=mistral | Tuesday 02 October 2018 08:29:45 -0400 (0:00:00.096) 0:00:58.404 ******* >2018-10-02 08:29:46,244 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "83e25980fe9de8062a6ed1bf9ae6e65e50632796", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-7e101e56-ed57-4f66-be4c-6aa2207a8b85", "gid": 0, "group": "root", "md5sum": "57be8b1fc6d59cd3870a4212411d2722", "mode": "0644", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2999, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483385.73-223443505555675/source", "state": "file", "uid": 0} >2018-10-02 08:29:46,304 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "c43969d6654dbafde7490e8bbf407f9afd6ccd3e", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-5259d000-2899-471b-897f-e026ec2177d5", "gid": 0, "group": "root", "md5sum": "68b250f4e51a09b73b0d916e27ba7569", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2996, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483385.79-12090999560639/source", "state": "file", "uid": 0} >2018-10-02 08:29:46,315 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "bf59c5c1fdc89e40dc3bbd13b9a95f17e80ec53a", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-29ca094e-5c08-46ff-880c-2abd3db6623d", "gid": 0, "group": "root", "md5sum": "af690f39d5e505192fcea70220adb93a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3000, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483385.76-145893788768363/source", "state": "file", "uid": 0} >2018-10-02 08:29:46,343 p=1004 u=mistral | TASK [Check if deployed file exists for CADeployment] ************************** >2018-10-02 08:29:46,343 p=1004 u=mistral | Tuesday 02 October 2018 08:29:46 -0400 (0:00:00.672) 0:00:59.077 ******* >2018-10-02 08:29:46,539 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:46,581 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:46,601 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:46,628 p=1004 u=mistral | TASK [Check previous deployment rc for CADeployment] *************************** >2018-10-02 08:29:46,628 p=1004 u=mistral | Tuesday 02 October 2018 
08:29:46 -0400 (0:00:00.284) 0:00:59.361 ******* >2018-10-02 08:29:46,657 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,681 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,697 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,723 p=1004 u=mistral | TASK [Remove deployed file for CADeployment when previous deployment failed] *** >2018-10-02 08:29:46,723 p=1004 u=mistral | Tuesday 02 October 2018 08:29:46 -0400 (0:00:00.095) 0:00:59.457 ******* >2018-10-02 08:29:46,751 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,775 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,789 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,813 p=1004 u=mistral | TASK [Force remove deployed file for CADeployment] ***************************** >2018-10-02 08:29:46,813 p=1004 u=mistral | Tuesday 02 October 2018 08:29:46 -0400 (0:00:00.089) 0:00:59.547 ******* >2018-10-02 08:29:46,840 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,866 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,878 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:46,902 p=1004 u=mistral | TASK [Run deployment CADeployment] ********************************************* >2018-10-02 08:29:46,902 p=1004 u=mistral | Tuesday 02 October 2018 08:29:46 -0400 (0:00:00.089) 0:00:59.636 ******* 
>2018-10-02 08:29:48,348 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.notify.json)", "delta": "0:00:01.226914", "end": "2018-10-02 08:29:48.325301", "rc": 0, "start": "2018-10-02 08:29:47.098387", "stderr": "[2018-10-02 08:29:47,128] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json\n[2018-10-02 08:29:47,895] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 08:29:47,896] (heat-config) [DEBUG] [2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 08:29:47,153] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NodeTLSCAData-j5ypsv7er5zj-CADeployment-assosh5fwrsd/834f8bea-5eb1-452a-b735-9776e5a67c32\n[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:47,154] 
(heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85\n[2018-10-02 08:29:47,891] (heat-config) [INFO] \n[2018-10-02 08:29:47,891] (heat-config) [DEBUG] \n[2018-10-02 08:29:47,891] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85\n\n[2018-10-02 08:29:47,896] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:47,896] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.notify.json\n[2018-10-02 08:29:48,318] (heat-config) [INFO] \n[2018-10-02 08:29:48,318] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:47,128] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json", "[2018-10-02 08:29:47,895] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 08:29:47,896] (heat-config) [DEBUG] [2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", 
"EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 08:29:47,153] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NodeTLSCAData-j5ypsv7er5zj-CADeployment-assosh5fwrsd/834f8bea-5eb1-452a-b735-9776e5a67c32", "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:47,154] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85", "[2018-10-02 08:29:47,891] (heat-config) [INFO] ", "[2018-10-02 08:29:47,891] (heat-config) [DEBUG] ", "[2018-10-02 08:29:47,891] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85", "", "[2018-10-02 08:29:47,896] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:47,896] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.notify.json", "[2018-10-02 08:29:48,318] (heat-config) [INFO] ", "[2018-10-02 08:29:48,318] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:48,387 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.notify.json)", "delta": "0:00:01.238704", "end": "2018-10-02 08:29:48.365276", "rc": 0, "start": "2018-10-02 08:29:47.126572", "stderr": "[2018-10-02 08:29:47,154] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json\n[2018-10-02 08:29:47,927] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 08:29:47,927] (heat-config) [DEBUG] [2018-10-02 08:29:47,179] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 08:29:47,180] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 08:29:47,180] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NodeTLSCAData-unfc4cencvoz-CADeployment-xtinzodxrtrt/8958a6bd-6e76-48db-8450-be3d9f3a1788\n[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:47,180] 
(heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d\n[2018-10-02 08:29:47,923] (heat-config) [INFO] \n[2018-10-02 08:29:47,923] (heat-config) [DEBUG] \n[2018-10-02 08:29:47,923] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d\n\n[2018-10-02 08:29:47,927] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:47,928] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.notify.json\n[2018-10-02 08:29:48,359] (heat-config) [INFO] \n[2018-10-02 08:29:48,359] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:47,154] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json", "[2018-10-02 08:29:47,927] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 08:29:47,927] (heat-config) [DEBUG] [2018-10-02 08:29:47,179] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 08:29:47,180] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", 
"EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 08:29:47,180] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NodeTLSCAData-unfc4cencvoz-CADeployment-xtinzodxrtrt/8958a6bd-6e76-48db-8450-be3d9f3a1788", "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:47,180] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d", "[2018-10-02 08:29:47,923] (heat-config) [INFO] ", "[2018-10-02 08:29:47,923] (heat-config) [DEBUG] ", "[2018-10-02 08:29:47,923] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d", "", "[2018-10-02 08:29:47,927] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:47,928] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.notify.json", "[2018-10-02 08:29:48,359] (heat-config) [INFO] ", "[2018-10-02 08:29:48,359] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:48,410 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.notify.json)", "delta": "0:00:01.230217", "end": "2018-10-02 08:29:48.389221", "rc": 0, "start": "2018-10-02 08:29:47.159004", "stderr": "[2018-10-02 08:29:47,187] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json\n[2018-10-02 08:29:47,959] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 08:29:47,959] (heat-config) [DEBUG] [2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 08:29:47,214] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NodeTLSCAData-b4w3q4ayrwap-CADeployment-pthtksjee2zc/05372f0b-4c28-4853-9f54-19d696679442\n[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:47,214] (heat-config) 
[DEBUG] Running /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5\n[2018-10-02 08:29:47,954] (heat-config) [INFO] \n[2018-10-02 08:29:47,955] (heat-config) [DEBUG] \n[2018-10-02 08:29:47,955] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5\n\n[2018-10-02 08:29:47,959] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:47,960] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.notify.json\n[2018-10-02 08:29:48,382] (heat-config) [INFO] \n[2018-10-02 08:29:48,383] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:47,187] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json", "[2018-10-02 08:29:47,959] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 08:29:47,959] (heat-config) [DEBUG] [2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", 
"EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 08:29:47,214] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NodeTLSCAData-b4w3q4ayrwap-CADeployment-pthtksjee2zc/05372f0b-4c28-4853-9f54-19d696679442", "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:47,214] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5", "[2018-10-02 08:29:47,954] (heat-config) [INFO] ", "[2018-10-02 08:29:47,955] (heat-config) [DEBUG] ", "[2018-10-02 08:29:47,955] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5", "", "[2018-10-02 08:29:47,959] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:47,960] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.notify.json", "[2018-10-02 08:29:48,382] (heat-config) [INFO] ", "[2018-10-02 08:29:48,383] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:48,440 p=1004 u=mistral | TASK [Output for CADeployment] ************************************************* >2018-10-02 08:29:48,441 p=1004 u=mistral | Tuesday 02 October 2018 08:29:48 -0400 (0:00:01.538) 0:01:01.174 ******* >2018-10-02 08:29:48,503 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:47,128] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json", > "[2018-10-02 08:29:47,895] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 08:29:47,896] (heat-config) [DEBUG] [2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 08:29:47,153] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > 
"cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 08:29:47,153] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:29:47,153] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6lxm3zwfyvyb-0-as5h5kvla5s5-NodeTLSCAData-j5ypsv7er5zj-CADeployment-assosh5fwrsd/834f8bea-5eb1-452a-b735-9776e5a67c32", > "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:47,154] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:47,154] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85", > "[2018-10-02 08:29:47,891] (heat-config) [INFO] ", > "[2018-10-02 08:29:47,891] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:47,891] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7e101e56-ed57-4f66-be4c-6aa2207a8b85", > "", > "[2018-10-02 08:29:47,896] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:47,896] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.json < /var/lib/heat-config/deployed/7e101e56-ed57-4f66-be4c-6aa2207a8b85.notify.json", > "[2018-10-02 08:29:48,318] (heat-config) [INFO] ", > "[2018-10-02 08:29:48,318] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:48,523 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:47,154] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json", > "[2018-10-02 08:29:47,927] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 08:29:47,927] (heat-config) [DEBUG] [2018-10-02 08:29:47,179] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > 
"F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-NodeTLSCAData-unfc4cencvoz-CADeployment-xtinzodxrtrt/8958a6bd-6e76-48db-8450-be3d9f3a1788", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:47,180] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:47,180] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d", > "[2018-10-02 08:29:47,923] (heat-config) [INFO] ", > "[2018-10-02 08:29:47,923] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:47,923] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/29ca094e-5c08-46ff-880c-2abd3db6623d", > "", > "[2018-10-02 08:29:47,927] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:47,928] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.json < /var/lib/heat-config/deployed/29ca094e-5c08-46ff-880c-2abd3db6623d.notify.json", > "[2018-10-02 08:29:48,359] (heat-config) [INFO] ", > "[2018-10-02 08:29:48,359] (heat-config) [DEBUG] " > ] > }, > { > "status_code": 
"0" > } > ] >} >2018-10-02 08:29:48,555 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:47,187] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json", > "[2018-10-02 08:29:47,959] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 08:29:47,959] (heat-config) [DEBUG] [2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 08:29:47,213] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > 
"P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NodeTLSCAData-b4w3q4ayrwap-CADeployment-pthtksjee2zc/05372f0b-4c28-4853-9f54-19d696679442", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:47,214] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:47,214] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5", > "[2018-10-02 08:29:47,954] (heat-config) [INFO] ", > "[2018-10-02 08:29:47,955] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:47,955] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5259d000-2899-471b-897f-e026ec2177d5", > "", > "[2018-10-02 08:29:47,959] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:47,960] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.json < /var/lib/heat-config/deployed/5259d000-2899-471b-897f-e026ec2177d5.notify.json", > "[2018-10-02 08:29:48,382] (heat-config) [INFO] ", > "[2018-10-02 08:29:48,383] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:48,585 p=1004 u=mistral | TASK [Check-mode for Run deployment CADeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:48,585 p=1004 u=mistral | Tuesday 02 October 2018 08:29:48 -0400 (0:00:00.144) 0:01:01.318 ******* 
>2018-10-02 08:29:48,616 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:48,642 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:48,651 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:48,677 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:29:48,677 p=1004 u=mistral | Tuesday 02 October 2018 08:29:48 -0400 (0:00:00.092) 0:01:01.410 ******* >2018-10-02 08:29:49,048 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "7bd91d2a-e9d1-4853-85b3-6b78a537b1ca"}, "changed": false} >2018-10-02 08:29:49,075 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:49,075 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.397) 0:01:01.808 ******* >2018-10-02 08:29:49,449 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 08:29:49,476 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:49,476 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.401) 0:01:02.210 ******* >2018-10-02 08:29:49,495 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,522 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:29:49,523 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.046) 0:01:02.256 ******* >2018-10-02 08:29:49,543 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,569 p=1004 u=mistral | TASK 
[Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:49,569 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.046) 0:01:02.302 ******* >2018-10-02 08:29:49,593 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,623 p=1004 u=mistral | TASK [Render deployment file for ControllerDeployment for check-mode] ********** >2018-10-02 08:29:49,623 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.053) 0:01:02.356 ******* >2018-10-02 08:29:49,643 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,670 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:29:49,671 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.047) 0:01:02.404 ******* >2018-10-02 08:29:49,691 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,717 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:29:49,717 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.046) 0:01:02.451 ******* >2018-10-02 08:29:49,736 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,761 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:49,761 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.043) 0:01:02.494 ******* >2018-10-02 08:29:49,780 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,804 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 
08:29:49,805 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.043) 0:01:02.538 ******* >2018-10-02 08:29:49,826 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:49,849 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:49,850 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.045) 0:01:02.583 ******* >2018-10-02 08:29:49,869 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:49,893 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:49,893 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.043) 0:01:02.626 ******* >2018-10-02 08:29:49,910 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:49,933 p=1004 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-10-02 08:29:49,934 p=1004 u=mistral | Tuesday 02 October 2018 08:29:49 -0400 (0:00:00.040) 0:01:02.667 ******* >2018-10-02 08:29:50,902 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "45769163d9c0044705b845e0f793f5f71c2e3832", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-7bd91d2a-e9d1-4853-85b3-6b78a537b1ca", "gid": 0, "group": "root", "md5sum": "71233f2cd5a31b04c9bcbcfe7cbc5107", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483390.4-52776443159113/source", "state": "file", "uid": 0} >2018-10-02 08:29:50,929 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-10-02 08:29:50,929 p=1004 u=mistral | Tuesday 02 October 2018 08:29:50 -0400 (0:00:00.995) 0:01:03.662 ******* >2018-10-02 08:29:51,131 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": 
{"exists": false}} >2018-10-02 08:29:51,159 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-10-02 08:29:51,159 p=1004 u=mistral | Tuesday 02 October 2018 08:29:51 -0400 (0:00:00.230) 0:01:03.892 ******* >2018-10-02 08:29:51,175 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:51,201 p=1004 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-10-02 08:29:51,201 p=1004 u=mistral | Tuesday 02 October 2018 08:29:51 -0400 (0:00:00.041) 0:01:03.934 ******* >2018-10-02 08:29:51,220 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:51,249 p=1004 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-10-02 08:29:51,249 p=1004 u=mistral | Tuesday 02 October 2018 08:29:51 -0400 (0:00:00.048) 0:01:03.982 ******* >2018-10-02 08:29:51,265 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:51,290 p=1004 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-10-02 08:29:51,291 p=1004 u=mistral | Tuesday 02 October 2018 08:29:51 -0400 (0:00:00.041) 0:01:04.024 ******* >2018-10-02 08:29:52,084 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.notify.json)", "delta": "0:00:00.529710", "end": "2018-10-02 08:29:52.064305", "rc": 0, "start": "2018-10-02 08:29:51.534595", "stderr": "[2018-10-02 08:29:51,566] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json\n[2018-10-02 08:29:51,697] 
(heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:51,697] (heat-config) [DEBUG] \n[2018-10-02 08:29:51,697] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:29:51,697] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.notify.json\n[2018-10-02 08:29:52,057] (heat-config) [INFO] \n[2018-10-02 08:29:52,057] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:51,566] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json", "[2018-10-02 08:29:51,697] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:51,697] (heat-config) [DEBUG] ", "[2018-10-02 08:29:51,697] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:29:51,697] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.notify.json", "[2018-10-02 08:29:52,057] (heat-config) [INFO] ", "[2018-10-02 08:29:52,057] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:52,112 p=1004 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-10-02 08:29:52,112 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.821) 0:01:04.846 ******* >2018-10-02 08:29:52,216 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:51,566] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json", > "[2018-10-02 08:29:51,697] (heat-config) [INFO] {\"deploy_stdout\": \"\", 
\"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:51,697] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:51,697] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:29:51,697] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.json < /var/lib/heat-config/deployed/7bd91d2a-e9d1-4853-85b3-6b78a537b1ca.notify.json", > "[2018-10-02 08:29:52,057] (heat-config) [INFO] ", > "[2018-10-02 08:29:52,057] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:52,242 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:52,242 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.130) 0:01:04.976 ******* >2018-10-02 08:29:52,259 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,283 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:29:52,283 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.041) 0:01:05.017 ******* >2018-10-02 08:29:52,407 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "79001ad4-6ed7-4598-9ca3-5387b61831ea"}, "changed": false} >2018-10-02 08:29:52,428 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:52,429 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.145) 0:01:05.162 ******* >2018-10-02 08:29:52,553 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:52,580 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:52,580 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 
(0:00:00.151) 0:01:05.314 ******* >2018-10-02 08:29:52,670 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,752 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:29:52,753 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.172) 0:01:05.486 ******* >2018-10-02 08:29:52,773 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,799 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:52,799 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.046) 0:01:05.532 ******* >2018-10-02 08:29:52,818 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,844 p=1004 u=mistral | TASK [Render deployment file for ControllerHostsDeployment for check-mode] ***** >2018-10-02 08:29:52,844 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.045) 0:01:05.578 ******* >2018-10-02 08:29:52,863 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,886 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:29:52,886 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.041) 0:01:05.620 ******* >2018-10-02 08:29:52,904 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,928 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:29:52,928 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.041) 0:01:05.662 ******* >2018-10-02 08:29:52,946 p=1004 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:52,969 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:52,970 p=1004 u=mistral | Tuesday 02 October 2018 08:29:52 -0400 (0:00:00.041) 0:01:05.703 ******* >2018-10-02 08:29:52,989 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:53,016 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:53,017 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.047) 0:01:05.750 ******* >2018-10-02 08:29:53,039 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:53,065 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:53,066 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.048) 0:01:05.799 ******* >2018-10-02 08:29:53,084 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:53,111 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:53,111 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.045) 0:01:05.844 ******* >2018-10-02 08:29:53,128 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:53,157 p=1004 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-10-02 08:29:53,157 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.046) 0:01:05.891 ******* >2018-10-02 08:29:53,694 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f6050d863beb625067fd02245dda118f04cb8820", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-79001ad4-6ed7-4598-9ca3-5387b61831ea", "gid": 0, 
"group": "root", "md5sum": "52f2d076345d50dbb546d9ad096dd68d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4430, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483393.22-53572598520340/source", "state": "file", "uid": 0} >2018-10-02 08:29:53,720 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-10-02 08:29:53,720 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.562) 0:01:06.453 ******* >2018-10-02 08:29:53,908 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:53,933 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-10-02 08:29:53,933 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.213) 0:01:06.667 ******* >2018-10-02 08:29:53,950 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:53,973 p=1004 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-10-02 08:29:53,973 p=1004 u=mistral | Tuesday 02 October 2018 08:29:53 -0400 (0:00:00.039) 0:01:06.707 ******* >2018-10-02 08:29:53,995 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:54,021 p=1004 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-10-02 08:29:54,021 p=1004 u=mistral | Tuesday 02 October 2018 08:29:54 -0400 (0:00:00.048) 0:01:06.755 ******* >2018-10-02 08:29:54,038 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:54,065 p=1004 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-10-02 08:29:54,065 p=1004 u=mistral | Tuesday 02 October 2018 08:29:54 -0400 
(0:00:00.043) 0:01:06.799 ******* >2018-10-02 08:29:54,803 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.notify.json)", "delta": "0:00:00.504429", "end": "2018-10-02 08:29:54.752105", "rc": 0, "start": "2018-10-02 08:29:54.247676", "stderr": "[2018-10-02 08:29:54,275] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json\n[2018-10-02 08:29:54,328] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:54,328] (heat-config) [DEBUG] [2018-10-02 08:29:54,297] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 08:29:54,298] (heat-config) [INFO] 
deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-vew7ehsmotvp-0-2r4nmfaqlb5j/1e022ae2-93c7-423c-9c4e-9ba144c41cb9\n[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:54,298] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea\n[2018-10-02 08:29:54,324] (heat-config) [INFO] \n[2018-10-02 08:29:54,324] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 
ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 
overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /controller-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 
08:29:54,325] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea\n\n[2018-10-02 08:29:54,328] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:29:54,329] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.notify.json\n[2018-10-02 08:29:54,745] (heat-config) [INFO] \n[2018-10-02 08:29:54,745] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:54,275] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json", "[2018-10-02 08:29:54,328] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 
compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain 
compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:54,328] (heat-config) [DEBUG] [2018-10-02 08:29:54,297] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 08:29:54,298] (heat-config) 
[INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-vew7ehsmotvp-0-2r4nmfaqlb5j/1e022ae2-93c7-423c-9c4e-9ba144c41cb9", "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:54,298] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea", "[2018-10-02 08:29:54,324] (heat-config) [INFO] ", "[2018-10-02 08:29:54,324] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", 
"192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", 
"192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /controller-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 08:29:54,325] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea", "", "[2018-10-02 08:29:54,328] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:29:54,329] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.notify.json", "[2018-10-02 08:29:54,745] (heat-config) [INFO] ", "[2018-10-02 08:29:54,745] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:54,850 p=1004 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-10-02 08:29:54,850 p=1004 u=mistral | Tuesday 02 October 2018 08:29:54 -0400 (0:00:00.784) 0:01:07.584 ******* >2018-10-02 08:29:54,937 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:54,275] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json", > "[2018-10-02 08:29:54,328] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 
overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain 
controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:54,328] (heat-config) [DEBUG] [2018-10-02 08:29:54,297] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-vew7ehsmotvp-0-2r4nmfaqlb5j/1e022ae2-93c7-423c-9c4e-9ba144c41cb9", > "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:54,298] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:54,298] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea", > "[2018-10-02 08:29:54,324] (heat-config) [INFO] ", > "[2018-10-02 08:29:54,324] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain 
compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", 
> "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 
compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", 
> "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 
controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > 
"192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain 
controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 08:29:54,325] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79001ad4-6ed7-4598-9ca3-5387b61831ea", > "", > "[2018-10-02 08:29:54,328] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:29:54,329] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.json < /var/lib/heat-config/deployed/79001ad4-6ed7-4598-9ca3-5387b61831ea.notify.json", > "[2018-10-02 08:29:54,745] (heat-config) [INFO] ", > "[2018-10-02 08:29:54,745] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:54,983 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:54,983 p=1004 u=mistral | Tuesday 02 October 2018 08:29:54 -0400 (0:00:00.132) 0:01:07.716 ******* >2018-10-02 08:29:54,999 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,025 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:29:55,025 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.042) 0:01:07.759 ******* >2018-10-02 08:29:55,182 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "f5c59479-ca3a-4cdb-9cc7-855633358bd5"}, "changed": false} >2018-10-02 08:29:55,209 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:55,209 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.183) 0:01:07.943 ******* >2018-10-02 08:29:55,366 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "hiera"}, 
"changed": false} >2018-10-02 08:29:55,392 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:55,392 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.182) 0:01:08.125 ******* >2018-10-02 08:29:55,410 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,434 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:29:55,434 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.042) 0:01:08.168 ******* >2018-10-02 08:29:55,452 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,476 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:55,476 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.042) 0:01:08.210 ******* >2018-10-02 08:29:55,494 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,521 p=1004 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment for check-mode] *** >2018-10-02 08:29:55,522 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.045) 0:01:08.255 ******* >2018-10-02 08:29:55,540 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,566 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:29:55,566 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.044) 0:01:08.300 ******* >2018-10-02 08:29:55,585 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,609 p=1004 u=mistral | TASK [List hieradata files for check 
mode] ************************************* >2018-10-02 08:29:55,609 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.042) 0:01:08.342 ******* >2018-10-02 08:29:55,632 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,661 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:55,662 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.052) 0:01:08.395 ******* >2018-10-02 08:29:55,685 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,712 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:55,712 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.050) 0:01:08.445 ******* >2018-10-02 08:29:55,734 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:55,758 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:55,759 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.046) 0:01:08.492 ******* >2018-10-02 08:29:55,776 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:55,799 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:55,800 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.040) 0:01:08.533 ******* >2018-10-02 08:29:55,817 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:55,843 p=1004 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-10-02 08:29:55,843 p=1004 u=mistral | Tuesday 02 October 2018 08:29:55 -0400 (0:00:00.043) 0:01:08.576 ******* >2018-10-02 08:29:56,501 p=1004 u=mistral | changed: 
[controller-0] => {"changed": true, "checksum": "ac22752bb1b0258f5196da0b6248860c939fba79", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-f5c59479-ca3a-4cdb-9cc7-855633358bd5", "gid": 0, "group": "root", "md5sum": "76f0ed40cc8ffd928d85bc5fd686fbb2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483396.01-64503211984033/source", "state": "file", "uid": 0} >2018-10-02 08:29:56,539 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-10-02 08:29:56,539 p=1004 u=mistral | Tuesday 02 October 2018 08:29:56 -0400 (0:00:00.696) 0:01:09.273 ******* >2018-10-02 08:29:56,751 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:56,779 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-10-02 08:29:56,780 p=1004 u=mistral | Tuesday 02 October 2018 08:29:56 -0400 (0:00:00.240) 0:01:09.513 ******* >2018-10-02 08:29:56,799 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:56,826 p=1004 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-10-02 08:29:56,826 p=1004 u=mistral | Tuesday 02 October 2018 08:29:56 -0400 (0:00:00.046) 0:01:09.560 ******* >2018-10-02 08:29:56,848 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:56,876 p=1004 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-10-02 08:29:56,876 p=1004 u=mistral | Tuesday 02 October 2018 08:29:56 -0400 (0:00:00.050) 0:01:09.610 ******* >2018-10-02 08:29:56,896 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-10-02 08:29:56,922 p=1004 u=mistral | TASK [Run deployment ControllerAllNodesDeployment] ***************************** >2018-10-02 08:29:56,923 p=1004 u=mistral | Tuesday 02 October 2018 08:29:56 -0400 (0:00:00.046) 0:01:09.656 ******* >2018-10-02 08:29:57,680 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.notify.json)", "delta": "0:00:00.546974", "end": "2018-10-02 08:29:57.660706", "rc": 0, "start": "2018-10-02 08:29:57.113732", "stderr": "[2018-10-02 08:29:57,142] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json\n[2018-10-02 08:29:57,264] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:29:57,265] (heat-config) [DEBUG] \n[2018-10-02 08:29:57,265] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:29:57,265] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.notify.json\n[2018-10-02 08:29:57,653] (heat-config) [INFO] \n[2018-10-02 08:29:57,654] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:57,142] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json", "[2018-10-02 08:29:57,264] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:29:57,265] (heat-config) [DEBUG] ", "[2018-10-02 08:29:57,265] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:29:57,265] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.notify.json", "[2018-10-02 08:29:57,653] (heat-config) [INFO] ", "[2018-10-02 08:29:57,654] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:29:57,706 p=1004 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-10-02 08:29:57,707 p=1004 u=mistral | Tuesday 02 October 2018 08:29:57 -0400 (0:00:00.784) 0:01:10.440 ******* >2018-10-02 08:29:57,766 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:57,142] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json", > "[2018-10-02 08:29:57,264] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:29:57,265] (heat-config) [DEBUG] ", > "[2018-10-02 08:29:57,265] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:29:57,265] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.json < /var/lib/heat-config/deployed/f5c59479-ca3a-4cdb-9cc7-855633358bd5.notify.json", > "[2018-10-02 08:29:57,653] (heat-config) [INFO] ", > "[2018-10-02 08:29:57,654] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:29:57,796 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:29:57,796 p=1004 u=mistral | Tuesday 02 October 2018 08:29:57 -0400 (0:00:00.089) 0:01:10.530 ******* >2018-10-02 08:29:57,812 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:57,847 p=1004 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-10-02 08:29:57,847 p=1004 u=mistral | Tuesday 02 October 2018 08:29:57 -0400 (0:00:00.050) 0:01:10.580 ******* >2018-10-02 08:29:57,918 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "05a96074-2c5f-4cb5-9808-c1e399587e16"}, "changed": false} >2018-10-02 08:29:57,944 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:29:57,944 p=1004 u=mistral | Tuesday 02 October 2018 08:29:57 -0400 (0:00:00.097) 0:01:10.677 ******* >2018-10-02 08:29:58,011 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:29:58,038 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:29:58,038 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.094) 0:01:10.772 ******* >2018-10-02 08:29:58,057 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,081 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:29:58,081 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.043) 0:01:10.815 ******* >2018-10-02 08:29:58,101 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,126 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:29:58,126 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.044) 0:01:10.859 ******* >2018-10-02 08:29:58,145 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,170 p=1004 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment for check-mode] *** >2018-10-02 
08:29:58,170 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.044) 0:01:10.904 ******* >2018-10-02 08:29:58,187 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,211 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:29:58,211 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.041) 0:01:10.945 ******* >2018-10-02 08:29:58,227 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,251 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:29:58,251 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.039) 0:01:10.985 ******* >2018-10-02 08:29:58,268 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,290 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:58,290 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.039) 0:01:11.024 ******* >2018-10-02 08:29:58,314 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,340 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:29:58,341 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.050) 0:01:11.074 ******* >2018-10-02 08:29:58,363 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:58,385 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:29:58,386 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.045) 0:01:11.119 ******* >2018-10-02 08:29:58,403 p=1004 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:58,426 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:29:58,426 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.040) 0:01:11.159 ******* >2018-10-02 08:29:58,443 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:29:58,468 p=1004 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-10-02 08:29:58,468 p=1004 u=mistral | Tuesday 02 October 2018 08:29:58 -0400 (0:00:00.041) 0:01:11.201 ******* >2018-10-02 08:29:58,997 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "bfbdcf9931ddbd05d63fb2f0caabeb17759e9867", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-05a96074-2c5f-4cb5-9808-c1e399587e16", "gid": 0, "group": "root", "md5sum": "2fd987c93b2691a29a0bd61d0ad48a03", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483398.53-76742119926527/source", "state": "file", "uid": 0} >2018-10-02 08:29:59,025 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-10-02 08:29:59,026 p=1004 u=mistral | Tuesday 02 October 2018 08:29:59 -0400 (0:00:00.557) 0:01:11.759 ******* >2018-10-02 08:29:59,229 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:29:59,257 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-10-02 08:29:59,258 p=1004 u=mistral | Tuesday 02 October 2018 08:29:59 -0400 (0:00:00.232) 0:01:11.991 ******* >2018-10-02 08:29:59,280 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:59,311 p=1004 u=mistral | TASK 
[Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 08:29:59,311 p=1004 u=mistral | Tuesday 02 October 2018 08:29:59 -0400 (0:00:00.053) 0:01:12.044 ******* >2018-10-02 08:29:59,334 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:59,363 p=1004 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-10-02 08:29:59,363 p=1004 u=mistral | Tuesday 02 October 2018 08:29:59 -0400 (0:00:00.052) 0:01:12.096 ******* >2018-10-02 08:29:59,382 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:29:59,410 p=1004 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-10-02 08:29:59,410 p=1004 u=mistral | Tuesday 02 October 2018 08:29:59 -0400 (0:00:00.046) 0:01:12.143 ******* >2018-10-02 08:30:00,891 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.notify.json)", "delta": "0:00:01.276601", "end": "2018-10-02 08:30:00.869559", "rc": 0, "start": "2018-10-02 08:29:59.592958", "stderr": "[2018-10-02 08:29:59,620] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json\n[2018-10-02 08:30:00,441] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 
172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:00,442] (heat-config) [DEBUG] [2018-10-02 08:29:59,643] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10\n[2018-10-02 08:29:59,643] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 08:29:59,643] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-5su6cjsmfy3a-0-uei7bkd6uxa2/3981bb83-8c15-43b2-a8ca-cd9ee15a6d3f\n[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:29:59,644] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16\n[2018-10-02 08:30:00,437] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.1.20 for local network 172.17.1.0/24.\nPing to 172.17.1.20 succeeded.\nSUCCESS\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\nPing to 172.17.2.19 succeeded.\nSUCCESS\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\nPing to 172.17.3.15 succeeded.\nSUCCESS\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\nPing to 172.17.4.31 succeeded.\nSUCCESS\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\nPing to 
192.168.24.10 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-10-02 08:30:00,438] (heat-config) [DEBUG] \n[2018-10-02 08:30:00,438] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16\n\n[2018-10-02 08:30:00,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:00,442] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json < /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.notify.json\n[2018-10-02 08:30:00,862] (heat-config) [INFO] \n[2018-10-02 08:30:00,863] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:29:59,620] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json", "[2018-10-02 08:30:00,441] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:00,442] (heat-config) [DEBUG] [2018-10-02 08:29:59,643] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", "[2018-10-02 08:29:59,643] (heat-config) [INFO] 
validate_fqdn=False", "[2018-10-02 08:29:59,643] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-5su6cjsmfy3a-0-uei7bkd6uxa2/3981bb83-8c15-43b2-a8ca-cd9ee15a6d3f", "[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:29:59,644] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16", "[2018-10-02 08:30:00,437] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.1.20 for local network 172.17.1.0/24.", "Ping to 172.17.1.20 succeeded.", "SUCCESS", "Trying to ping 172.17.2.19 for local network 172.17.2.0/24.", "Ping to 172.17.2.19 succeeded.", "SUCCESS", "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", "Ping to 172.17.3.15 succeeded.", "SUCCESS", "Trying to ping 172.17.4.31 for local network 172.17.4.0/24.", "Ping to 172.17.4.31 succeeded.", "SUCCESS", "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", "Ping to 192.168.24.10 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-10-02 08:30:00,438] (heat-config) [DEBUG] ", "[2018-10-02 08:30:00,438] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16", "", "[2018-10-02 08:30:00,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:00,442] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json < 
/var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.notify.json", "[2018-10-02 08:30:00,862] (heat-config) [INFO] ", "[2018-10-02 08:30:00,863] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:00,921 p=1004 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-10-02 08:30:00,922 p=1004 u=mistral | Tuesday 02 October 2018 08:30:00 -0400 (0:00:01.511) 0:01:13.655 ******* >2018-10-02 08:30:00,983 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:29:59,620] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json", > "[2018-10-02 08:30:00,441] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:00,442] (heat-config) [DEBUG] [2018-10-02 08:29:59,643] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", > "[2018-10-02 08:29:59,643] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 08:29:59,643] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 08:29:59,643] (heat-config) [INFO] 
deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:29:59,643] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-5su6cjsmfy3a-0-uei7bkd6uxa2/3981bb83-8c15-43b2-a8ca-cd9ee15a6d3f", > "[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:29:59,644] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:29:59,644] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16", > "[2018-10-02 08:30:00,437] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.20 for local network 172.17.1.0/24.", > "Ping to 172.17.1.20 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.19 for local network 172.17.2.0/24.", > "Ping to 172.17.2.19 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", > "Ping to 172.17.3.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.31 for local network 172.17.4.0/24.", > "Ping to 172.17.4.31 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", > "Ping to 192.168.24.10 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 08:30:00,438] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:00,438] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05a96074-2c5f-4cb5-9808-c1e399587e16", > "", > "[2018-10-02 08:30:00,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:00,442] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.json < /var/lib/heat-config/deployed/05a96074-2c5f-4cb5-9808-c1e399587e16.notify.json", > 
"[2018-10-02 08:30:00,862] (heat-config) [INFO] ", > "[2018-10-02 08:30:00,863] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:01,011 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:01,011 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.089) 0:01:13.744 ******* >2018-10-02 08:30:01,026 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,052 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:01,052 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.040) 0:01:13.785 ******* >2018-10-02 08:30:01,130 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "5269437a-ef3f-4810-bbd1-ab9dbc687b65"}, "changed": false} >2018-10-02 08:30:01,156 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:01,156 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.103) 0:01:13.889 ******* >2018-10-02 08:30:01,230 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} >2018-10-02 08:30:01,257 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:01,257 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.100) 0:01:13.990 ******* >2018-10-02 08:30:01,276 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,301 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:01,301 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.044) 0:01:14.035 ******* >2018-10-02 
08:30:01,317 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,343 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:01,343 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.041) 0:01:14.077 ******* >2018-10-02 08:30:01,360 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,385 p=1004 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment for check-mode] *** >2018-10-02 08:30:01,386 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.042) 0:01:14.119 ******* >2018-10-02 08:30:01,402 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,426 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:01,426 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.040) 0:01:14.159 ******* >2018-10-02 08:30:01,441 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,468 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:01,469 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.042) 0:01:14.202 ******* >2018-10-02 08:30:01,488 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,513 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:01,513 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.044) 0:01:14.247 ******* >2018-10-02 08:30:01,535 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:30:01,561 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:01,562 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.048) 0:01:14.295 ******* >2018-10-02 08:30:01,581 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:30:01,610 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:01,610 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.048) 0:01:14.344 ******* >2018-10-02 08:30:01,630 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:01,655 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:01,655 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.044) 0:01:14.389 ******* >2018-10-02 08:30:01,674 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:30:01,702 p=1004 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-10-02 08:30:01,702 p=1004 u=mistral | Tuesday 02 October 2018 08:30:01 -0400 (0:00:00.046) 0:01:14.435 ******* >2018-10-02 08:30:02,329 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "fd980ff00b2b17d2600d4f7d46e96644eb8e0ca3", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-5269437a-ef3f-4810-bbd1-ab9dbc687b65", "gid": 0, "group": "root", "md5sum": "68870250c2b32f96888dd49127faf28b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21378, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483401.85-157616426031449/source", "state": "file", "uid": 0} >2018-10-02 08:30:02,357 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-10-02 08:30:02,358 p=1004 
u=mistral | Tuesday 02 October 2018 08:30:02 -0400 (0:00:00.655) 0:01:15.091 ******* >2018-10-02 08:30:02,627 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:02,703 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-10-02 08:30:02,703 p=1004 u=mistral | Tuesday 02 October 2018 08:30:02 -0400 (0:00:00.345) 0:01:15.436 ******* >2018-10-02 08:30:02,724 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:02,751 p=1004 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-10-02 08:30:02,751 p=1004 u=mistral | Tuesday 02 October 2018 08:30:02 -0400 (0:00:00.047) 0:01:15.484 ******* >2018-10-02 08:30:02,772 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:02,800 p=1004 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-10-02 08:30:02,800 p=1004 u=mistral | Tuesday 02 October 2018 08:30:02 -0400 (0:00:00.048) 0:01:15.533 ******* >2018-10-02 08:30:02,817 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:02,843 p=1004 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-10-02 08:30:02,843 p=1004 u=mistral | Tuesday 02 October 2018 08:30:02 -0400 (0:00:00.043) 0:01:15.577 ******* >2018-10-02 08:30:09,613 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.notify.json)", "delta": "0:00:06.554405", "end": "2018-10-02 08:30:09.586145", "rc": 0, "start": "2018-10-02 08:30:03.031740", "stderr": "[2018-10-02 
08:30:03,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json\n[2018-10-02 08:30:09,172] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:09,173] (heat-config) [DEBUG] [2018-10-02 08:30:03,084] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_variables.json\n[2018-10-02 08:30:09,167] (heat-config) [INFO] Return code 0\n[2018-10-02 08:30:09,167] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 08:30:09,167] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml\n\n[2018-10-02 08:30:09,173] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 08:30:09,173] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.notify.json\n[2018-10-02 08:30:09,578] (heat-config) [INFO] \n[2018-10-02 08:30:09,578] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:03,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json", "[2018-10-02 08:30:09,172] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:09,173] (heat-config) [DEBUG] [2018-10-02 08:30:03,084] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_variables.json", "[2018-10-02 08:30:09,167] (heat-config) [INFO] Return code 0", "[2018-10-02 08:30:09,167] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", 
"changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 08:30:09,167] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml", "", "[2018-10-02 08:30:09,173] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 08:30:09,173] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.notify.json", "[2018-10-02 08:30:09,578] (heat-config) [INFO] ", "[2018-10-02 08:30:09,578] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:09,641 p=1004 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-10-02 08:30:09,642 p=1004 u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:06.798) 0:01:22.375 ******* >2018-10-02 08:30:09,697 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:03,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json", > "[2018-10-02 08:30:09,172] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost 
: ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:09,173] (heat-config) [DEBUG] [2018-10-02 08:30:03,084] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_variables.json", > "[2018-10-02 08:30:09,167] (heat-config) [INFO] Return code 0", > "[2018-10-02 08:30:09,167] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 08:30:09,167] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/5269437a-ef3f-4810-bbd1-ab9dbc687b65_playbook.yaml", > "", > "[2018-10-02 08:30:09,173] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 08:30:09,173] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.json < /var/lib/heat-config/deployed/5269437a-ef3f-4810-bbd1-ab9dbc687b65.notify.json", > "[2018-10-02 08:30:09,578] (heat-config) [INFO] ", > "[2018-10-02 08:30:09,578] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:09,724 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:09,724 p=1004 
u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:00.082) 0:01:22.458 ******* >2018-10-02 08:30:09,739 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:09,764 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:09,764 p=1004 u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:00.039) 0:01:22.497 ******* >2018-10-02 08:30:09,825 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "38b3467e-baf6-4cf6-a2ae-f1c2a0801670"}, "changed": false} >2018-10-02 08:30:09,849 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:09,849 p=1004 u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:00.084) 0:01:22.582 ******* >2018-10-02 08:30:09,909 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:09,933 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:09,933 p=1004 u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:00.083) 0:01:22.666 ******* >2018-10-02 08:30:09,950 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:09,973 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:09,973 p=1004 u=mistral | Tuesday 02 October 2018 08:30:09 -0400 (0:00:00.040) 0:01:22.707 ******* >2018-10-02 08:30:09,989 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,013 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:10,013 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.039) 0:01:22.746 
******* >2018-10-02 08:30:10,031 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,059 p=1004 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy for check-mode] ***** >2018-10-02 08:30:10,059 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.046) 0:01:22.793 ******* >2018-10-02 08:30:10,078 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,103 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:10,103 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.043) 0:01:22.836 ******* >2018-10-02 08:30:10,121 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,146 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:10,146 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.043) 0:01:22.879 ******* >2018-10-02 08:30:10,165 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,189 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:10,189 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.043) 0:01:22.922 ******* >2018-10-02 08:30:10,210 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,236 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:10,236 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.046) 0:01:22.969 ******* >2018-10-02 08:30:10,256 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 
08:30:10,281 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:10,282 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.045) 0:01:23.015 ******* >2018-10-02 08:30:10,300 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:10,325 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:10,325 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.043) 0:01:23.058 ******* >2018-10-02 08:30:10,342 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:30:10,368 p=1004 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-10-02 08:30:10,368 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.042) 0:01:23.101 ******* >2018-10-02 08:30:10,898 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "193c8e1b49ec9ca9b316f885a2d4d05b2ce3b125", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-38b3467e-baf6-4cf6-a2ae-f1c2a0801670", "gid": 0, "group": "root", "md5sum": "cb24410b7fdea0c92ec036fd625b5303", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483410.43-139779141184327/source", "state": "file", "uid": 0} >2018-10-02 08:30:10,927 p=1004 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-10-02 08:30:10,927 p=1004 u=mistral | Tuesday 02 October 2018 08:30:10 -0400 (0:00:00.559) 0:01:23.661 ******* >2018-10-02 08:30:11,117 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:11,145 p=1004 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-10-02 08:30:11,146 p=1004 u=mistral | 
Tuesday 02 October 2018 08:30:11 -0400 (0:00:00.218) 0:01:23.879 ******* >2018-10-02 08:30:11,165 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:11,191 p=1004 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-10-02 08:30:11,191 p=1004 u=mistral | Tuesday 02 October 2018 08:30:11 -0400 (0:00:00.045) 0:01:23.925 ******* >2018-10-02 08:30:11,212 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:11,239 p=1004 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-10-02 08:30:11,239 p=1004 u=mistral | Tuesday 02 October 2018 08:30:11 -0400 (0:00:00.047) 0:01:23.972 ******* >2018-10-02 08:30:11,257 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:11,281 p=1004 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-10-02 08:30:11,281 p=1004 u=mistral | Tuesday 02 October 2018 08:30:11 -0400 (0:00:00.041) 0:01:24.014 ******* >2018-10-02 08:30:11,949 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.notify.json)", "delta": "0:00:00.458984", "end": "2018-10-02 08:30:11.925489", "rc": 0, "start": "2018-10-02 08:30:11.466505", "stderr": "[2018-10-02 08:30:11,495] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json\n[2018-10-02 08:30:11,529] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:11,529] (heat-config) [DEBUG] [2018-10-02 08:30:11,518] (heat-config) [INFO] artifact_urls=\n[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf\n[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ControllerArtifactsDeploy-raqrflt6p6v2-0-7rzjt4bfvlhe/75a4ac16-f255-4379-8eb8-06af072662c8\n[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:11,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670\n[2018-10-02 08:30:11,525] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-10-02 08:30:11,525] (heat-config) [DEBUG] \n[2018-10-02 08:30:11,525] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670\n\n[2018-10-02 08:30:11,529] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:11,529] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.notify.json\n[2018-10-02 08:30:11,918] (heat-config) [INFO] \n[2018-10-02 08:30:11,919] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:11,495] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json", "[2018-10-02 08:30:11,529] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:11,529] (heat-config) [DEBUG] [2018-10-02 08:30:11,518] (heat-config) [INFO] artifact_urls=", "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ControllerArtifactsDeploy-raqrflt6p6v2-0-7rzjt4bfvlhe/75a4ac16-f255-4379-8eb8-06af072662c8", "[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:11,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670", "[2018-10-02 08:30:11,525] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-10-02 08:30:11,525] (heat-config) [DEBUG] ", "[2018-10-02 08:30:11,525] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670", "", "[2018-10-02 08:30:11,529] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:11,529] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.notify.json", "[2018-10-02 08:30:11,918] (heat-config) [INFO] ", "[2018-10-02 08:30:11,919] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:11,980 p=1004 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-10-02 08:30:11,980 p=1004 u=mistral | Tuesday 02 October 2018 08:30:11 -0400 (0:00:00.699) 0:01:24.714 ******* >2018-10-02 08:30:12,033 p=1004 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:11,495] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json", > "[2018-10-02 08:30:11,529] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:11,529] (heat-config) [DEBUG] [2018-10-02 08:30:11,518] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_server_id=8765325e-e8b6-4b1f-87f8-a3212b8a3bbf", > "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:11,518] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ControllerArtifactsDeploy-raqrflt6p6v2-0-7rzjt4bfvlhe/75a4ac16-f255-4379-8eb8-06af072662c8", > "[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:11,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:11,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670", > "[2018-10-02 08:30:11,525] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-10-02 08:30:11,525] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:11,525] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/38b3467e-baf6-4cf6-a2ae-f1c2a0801670", > "", > "[2018-10-02 08:30:11,529] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:11,529] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.json < /var/lib/heat-config/deployed/38b3467e-baf6-4cf6-a2ae-f1c2a0801670.notify.json", > "[2018-10-02 08:30:11,918] (heat-config) [INFO] ", > "[2018-10-02 08:30:11,919] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:12,059 p=1004 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 08:30:12,059 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.078) 0:01:24.792 ******* >2018-10-02 08:30:12,074 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,095 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:12,095 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.036) 0:01:24.828 ******* >2018-10-02 08:30:12,150 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "f4a9b025-655a-427a-a7c2-347998d6a8e2"}, "changed": false} >2018-10-02 08:30:12,171 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:12,171 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.076) 0:01:24.905 ******* >2018-10-02 08:30:12,225 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:12,245 p=1004 u=mistral | TASK [Create hiera check-mode directory] 
*************************************** >2018-10-02 08:30:12,245 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.073) 0:01:24.978 ******* >2018-10-02 08:30:12,262 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,281 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:12,282 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.036) 0:01:25.015 ******* >2018-10-02 08:30:12,300 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,319 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:12,319 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.037) 0:01:25.052 ******* >2018-10-02 08:30:12,336 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,356 p=1004 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment for check-mode] *** >2018-10-02 08:30:12,356 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.037) 0:01:25.090 ******* >2018-10-02 08:30:12,374 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,393 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:12,393 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.036) 0:01:25.126 ******* >2018-10-02 08:30:12,416 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,441 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:12,441 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 
(0:00:00.048) 0:01:25.174 ******* >2018-10-02 08:30:12,460 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,480 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:12,481 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.039) 0:01:25.214 ******* >2018-10-02 08:30:12,501 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,521 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:12,521 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.040) 0:01:25.254 ******* >2018-10-02 08:30:12,542 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:12,564 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:12,564 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.042) 0:01:25.297 ******* >2018-10-02 08:30:12,583 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:12,604 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:12,605 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.040) 0:01:25.338 ******* >2018-10-02 08:30:12,622 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:12,645 p=1004 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-10-02 08:30:12,645 p=1004 u=mistral | Tuesday 02 October 2018 08:30:12 -0400 (0:00:00.040) 0:01:25.379 ******* >2018-10-02 08:30:13,211 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "82c3b478802852d39386c11228185320e49b656d", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-f4a9b025-655a-427a-a7c2-347998d6a8e2", "gid": 0, "group": "root", "md5sum": "836daa0749c4cf327c0b1783e2d11a51", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483412.71-115213464115503/source", "state": "file", "uid": 0} >2018-10-02 08:30:13,234 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-10-02 08:30:13,235 p=1004 u=mistral | Tuesday 02 October 2018 08:30:13 -0400 (0:00:00.589) 0:01:25.968 ******* >2018-10-02 08:30:13,419 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:13,442 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-10-02 08:30:13,443 p=1004 u=mistral | Tuesday 02 October 2018 08:30:13 -0400 (0:00:00.207) 0:01:26.176 ******* >2018-10-02 08:30:13,462 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:13,487 p=1004 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-10-02 08:30:13,487 p=1004 u=mistral | Tuesday 02 October 2018 08:30:13 -0400 (0:00:00.044) 0:01:26.221 ******* >2018-10-02 08:30:13,512 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:13,535 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-10-02 08:30:13,535 p=1004 u=mistral | Tuesday 02 October 2018 08:30:13 -0400 (0:00:00.047) 0:01:26.268 ******* >2018-10-02 08:30:13,554 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:13,578 p=1004 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] 
************************* >2018-10-02 08:30:13,578 p=1004 u=mistral | Tuesday 02 October 2018 08:30:13 -0400 (0:00:00.042) 0:01:26.311 ******* >2018-10-02 08:30:14,253 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.notify.json)", "delta": "0:00:00.462075", "end": "2018-10-02 08:30:14.232565", "rc": 0, "start": "2018-10-02 08:30:13.770490", "stderr": "[2018-10-02 08:30:13,799] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json\n[2018-10-02 08:30:13,827] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:13,828] (heat-config) [DEBUG] [2018-10-02 08:30:13,820] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-CephStorageUpgradeInitDeployment-2lzdwfq2tney/06ff8c40-4da9-46e6-b48e-dfe40316d88f\n[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:13,821] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2\n[2018-10-02 08:30:13,824] (heat-config) [INFO] \n[2018-10-02 08:30:13,824] (heat-config) [DEBUG] \n[2018-10-02 08:30:13,824] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2\n\n[2018-10-02 08:30:13,828] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:13,828] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.notify.json\n[2018-10-02 08:30:14,225] (heat-config) [INFO] \n[2018-10-02 08:30:14,226] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:13,799] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json", "[2018-10-02 08:30:13,827] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:13,828] (heat-config) [DEBUG] [2018-10-02 08:30:13,820] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-CephStorageUpgradeInitDeployment-2lzdwfq2tney/06ff8c40-4da9-46e6-b48e-dfe40316d88f", "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:13,821] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2", "[2018-10-02 08:30:13,824] (heat-config) [INFO] ", "[2018-10-02 08:30:13,824] (heat-config) [DEBUG] ", "[2018-10-02 08:30:13,824] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2", "", "[2018-10-02 08:30:13,828] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:13,828] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.notify.json", "[2018-10-02 08:30:14,225] (heat-config) [INFO] ", "[2018-10-02 08:30:14,226] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-10-02 08:30:14,277 p=1004 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-10-02 08:30:14,277 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.699) 0:01:27.011 ******* >2018-10-02 08:30:14,334 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:13,799] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json", > "[2018-10-02 08:30:13,827] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:13,828] (heat-config) [DEBUG] [2018-10-02 08:30:13,820] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-53btivfojecp-0-r67qhgbpx2gg-CephStorageUpgradeInitDeployment-2lzdwfq2tney/06ff8c40-4da9-46e6-b48e-dfe40316d88f", > "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:13,821] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:13,821] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2", > "[2018-10-02 08:30:13,824] (heat-config) [INFO] ", > "[2018-10-02 08:30:13,824] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:13,824] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4a9b025-655a-427a-a7c2-347998d6a8e2", > "", > "[2018-10-02 08:30:13,828] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:13,828] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.json < /var/lib/heat-config/deployed/f4a9b025-655a-427a-a7c2-347998d6a8e2.notify.json", > 
"[2018-10-02 08:30:14,225] (heat-config) [INFO] ", > "[2018-10-02 08:30:14,226] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:14,360 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:14,360 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.082) 0:01:27.093 ******* >2018-10-02 08:30:14,375 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,397 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:14,397 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.036) 0:01:27.130 ******* >2018-10-02 08:30:14,498 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "1319374f-667e-409a-9f36-a9d27f6cf160"}, "changed": false} >2018-10-02 08:30:14,522 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:14,522 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.125) 0:01:27.256 ******* >2018-10-02 08:30:14,629 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 08:30:14,652 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:14,652 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.129) 0:01:27.385 ******* >2018-10-02 08:30:14,671 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,692 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:14,693 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.040) 0:01:27.426 ******* >2018-10-02 08:30:14,711 p=1004 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,733 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:14,733 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.040) 0:01:27.466 ******* >2018-10-02 08:30:14,752 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,773 p=1004 u=mistral | TASK [Render deployment file for CephStorageDeployment for check-mode] ********* >2018-10-02 08:30:14,773 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.040) 0:01:27.507 ******* >2018-10-02 08:30:14,790 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,810 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:14,810 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.036) 0:01:27.544 ******* >2018-10-02 08:30:14,828 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,847 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:14,848 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.037) 0:01:27.581 ******* >2018-10-02 08:30:14,871 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,894 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:14,895 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.047) 0:01:27.628 ******* >2018-10-02 08:30:14,919 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:14,940 p=1004 u=mistral 
| TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:14,940 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.045) 0:01:27.674 ******* >2018-10-02 08:30:14,961 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:14,982 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:14,982 p=1004 u=mistral | Tuesday 02 October 2018 08:30:14 -0400 (0:00:00.041) 0:01:27.715 ******* >2018-10-02 08:30:15,001 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:15,022 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:15,022 p=1004 u=mistral | Tuesday 02 October 2018 08:30:15 -0400 (0:00:00.040) 0:01:27.755 ******* >2018-10-02 08:30:15,041 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:15,066 p=1004 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-10-02 08:30:15,066 p=1004 u=mistral | Tuesday 02 October 2018 08:30:15 -0400 (0:00:00.044) 0:01:27.800 ******* >2018-10-02 08:30:15,743 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "64399d51799c671887aa63a05b8fc01f11a1041e", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-1319374f-667e-409a-9f36-a9d27f6cf160", "gid": 0, "group": "root", "md5sum": "27246b9be48439ac092e3b92857f027f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9081, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483415.24-237402266578748/source", "state": "file", "uid": 0} >2018-10-02 08:30:15,766 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-10-02 08:30:15,766 p=1004 u=mistral | Tuesday 02 October 2018 08:30:15 -0400 (0:00:00.699) 0:01:28.499 ******* >2018-10-02 
08:30:16,032 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:16,102 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-10-02 08:30:16,102 p=1004 u=mistral | Tuesday 02 October 2018 08:30:16 -0400 (0:00:00.336) 0:01:28.836 ******* >2018-10-02 08:30:16,121 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:16,143 p=1004 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-10-02 08:30:16,143 p=1004 u=mistral | Tuesday 02 October 2018 08:30:16 -0400 (0:00:00.040) 0:01:28.876 ******* >2018-10-02 08:30:16,164 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:16,186 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-10-02 08:30:16,186 p=1004 u=mistral | Tuesday 02 October 2018 08:30:16 -0400 (0:00:00.042) 0:01:28.919 ******* >2018-10-02 08:30:16,204 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:16,226 p=1004 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-10-02 08:30:16,226 p=1004 u=mistral | Tuesday 02 October 2018 08:30:16 -0400 (0:00:00.040) 0:01:28.960 ******* >2018-10-02 08:30:16,972 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.notify.json)", "delta": "0:00:00.544919", "end": "2018-10-02 08:30:16.950969", "rc": 0, "start": "2018-10-02 08:30:16.406050", "stderr": "[2018-10-02 08:30:16,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json\n[2018-10-02 08:30:16,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:16,560] (heat-config) [DEBUG] \n[2018-10-02 08:30:16,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:30:16,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json < /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.notify.json\n[2018-10-02 08:30:16,944] (heat-config) [INFO] \n[2018-10-02 08:30:16,945] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:16,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json", "[2018-10-02 08:30:16,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:16,560] (heat-config) [DEBUG] ", "[2018-10-02 08:30:16,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:30:16,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json < /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.notify.json", "[2018-10-02 08:30:16,944] (heat-config) [INFO] ", "[2018-10-02 08:30:16,945] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:16,996 p=1004 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-10-02 08:30:16,996 p=1004 u=mistral | Tuesday 02 October 2018 08:30:16 -0400 (0:00:00.769) 0:01:29.729 ******* >2018-10-02 08:30:17,054 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:16,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json", > "[2018-10-02 08:30:16,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:16,560] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:16,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:30:16,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.json < /var/lib/heat-config/deployed/1319374f-667e-409a-9f36-a9d27f6cf160.notify.json", > "[2018-10-02 08:30:16,944] (heat-config) [INFO] ", > "[2018-10-02 08:30:16,945] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:17,079 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:17,080 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.083) 0:01:29.813 ******* >2018-10-02 08:30:17,095 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,116 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:17,116 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.036) 0:01:29.850 ******* >2018-10-02 08:30:17,178 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "fd38edc1-68ce-4fb9-8960-83b6776a9903"}, "changed": false} >2018-10-02 08:30:17,200 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:17,200 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.083) 0:01:29.934 ******* >2018-10-02 08:30:17,262 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:17,284 p=1004 u=mistral | TASK [Create hiera check-mode directory] 
*************************************** >2018-10-02 08:30:17,284 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.083) 0:01:30.018 ******* >2018-10-02 08:30:17,305 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,325 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:17,325 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.040) 0:01:30.058 ******* >2018-10-02 08:30:17,342 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,362 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:17,362 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.036) 0:01:30.095 ******* >2018-10-02 08:30:17,379 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,399 p=1004 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment for check-mode] **** >2018-10-02 08:30:17,400 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.037) 0:01:30.133 ******* >2018-10-02 08:30:17,416 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,435 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:17,435 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.035) 0:01:30.169 ******* >2018-10-02 08:30:17,453 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,472 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:17,472 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 
(0:00:00.036) 0:01:30.205 ******* >2018-10-02 08:30:17,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,511 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:17,511 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.038) 0:01:30.244 ******* >2018-10-02 08:30:17,532 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,553 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:17,554 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.042) 0:01:30.287 ******* >2018-10-02 08:30:17,576 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:17,598 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:17,598 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.044) 0:01:30.331 ******* >2018-10-02 08:30:17,617 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:17,639 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:17,639 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.041) 0:01:30.373 ******* >2018-10-02 08:30:17,657 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:17,680 p=1004 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-10-02 08:30:17,681 p=1004 u=mistral | Tuesday 02 October 2018 08:30:17 -0400 (0:00:00.041) 0:01:30.414 ******* >2018-10-02 08:30:18,218 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "7cd463e83a4974b6407dff29a06d36afbfa4df51", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-fd38edc1-68ce-4fb9-8960-83b6776a9903", "gid": 0, "group": "root", "md5sum": "529713ec07fb1d229721d1ce35a3d085", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4432, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483417.74-124032050422737/source", "state": "file", "uid": 0} >2018-10-02 08:30:18,240 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-10-02 08:30:18,241 p=1004 u=mistral | Tuesday 02 October 2018 08:30:18 -0400 (0:00:00.560) 0:01:30.974 ******* >2018-10-02 08:30:18,431 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:18,456 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-10-02 08:30:18,456 p=1004 u=mistral | Tuesday 02 October 2018 08:30:18 -0400 (0:00:00.215) 0:01:31.189 ******* >2018-10-02 08:30:18,476 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:18,498 p=1004 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-10-02 08:30:18,498 p=1004 u=mistral | Tuesday 02 October 2018 08:30:18 -0400 (0:00:00.042) 0:01:31.231 ******* >2018-10-02 08:30:18,519 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:18,541 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-10-02 08:30:18,542 p=1004 u=mistral | Tuesday 02 October 2018 08:30:18 -0400 (0:00:00.043) 0:01:31.275 ******* >2018-10-02 08:30:18,559 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:18,582 p=1004 u=mistral | TASK [Run deployment CephStorageHostsDeployment] 
******************************* >2018-10-02 08:30:18,583 p=1004 u=mistral | Tuesday 02 October 2018 08:30:18 -0400 (0:00:00.041) 0:01:31.316 ******* >2018-10-02 08:30:19,295 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.notify.json)", "delta": "0:00:00.470098", "end": "2018-10-02 08:30:19.244026", "rc": 0, "start": "2018-10-02 08:30:18.773928", "stderr": "[2018-10-02 08:30:18,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json\n[2018-10-02 08:30:18,856] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain 
compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 
ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:18,856] (heat-config) [DEBUG] [2018-10-02 08:30:18,822] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 08:30:18,822] (heat-config) [INFO] 
deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-walq5zsmp6ht-0-y2ifaucce3rz/e2e10654-f967-4a3a-a807-7da2921e65a1\n[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:18,823] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903\n[2018-10-02 08:30:18,852] (heat-config) [INFO] \n[2018-10-02 08:30:18,852] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 
ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 
overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /ceph-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 08:30:18,852] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903\n\n[2018-10-02 08:30:18,856] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:18,857] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.notify.json\n[2018-10-02 08:30:19,236] (heat-config) [INFO] \n[2018-10-02 08:30:19,237] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:18,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json", "[2018-10-02 08:30:18,856] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain 
compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 
ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:18,856] (heat-config) [DEBUG] [2018-10-02 08:30:18,822] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 08:30:18,822] (heat-config) 
[INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-walq5zsmp6ht-0-y2ifaucce3rz/e2e10654-f967-4a3a-a807-7da2921e65a1", "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:18,823] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903", "[2018-10-02 08:30:18,852] (heat-config) [INFO] ", "[2018-10-02 08:30:18,852] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", 
"192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", 
"192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /ceph-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 08:30:18,852] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903", "", "[2018-10-02 08:30:18,856] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:18,857] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.notify.json", "[2018-10-02 08:30:19,236] (heat-config) [INFO] ", "[2018-10-02 08:30:19,237] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:19,337 p=1004 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-10-02 08:30:19,338 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.755) 0:01:32.071 ******* >2018-10-02 08:30:19,429 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:18,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json", > "[2018-10-02 08:30:18,856] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 
overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain 
controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:18,856] (heat-config) [DEBUG] [2018-10-02 08:30:18,822] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-walq5zsmp6ht-0-y2ifaucce3rz/e2e10654-f967-4a3a-a807-7da2921e65a1", > "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:18,822] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:18,823] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903", > "[2018-10-02 08:30:18,852] (heat-config) [INFO] ", > "[2018-10-02 08:30:18,852] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain 
compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 
compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", 
> "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 
controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > 
"192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain 
controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 08:30:18,852] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/fd38edc1-68ce-4fb9-8960-83b6776a9903", > "", > "[2018-10-02 08:30:18,856] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:18,857] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.json < /var/lib/heat-config/deployed/fd38edc1-68ce-4fb9-8960-83b6776a9903.notify.json", > "[2018-10-02 08:30:19,236] (heat-config) [INFO] ", > "[2018-10-02 08:30:19,237] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:19,471 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:19,471 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.133) 0:01:32.205 ******* >2018-10-02 08:30:19,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:19,511 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:19,511 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.039) 0:01:32.244 ******* >2018-10-02 08:30:19,679 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "4647602e-01fd-4b3f-a518-ff34910b6a33"}, "changed": false} >2018-10-02 08:30:19,701 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:19,701 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.190) 0:01:32.434 ******* >2018-10-02 08:30:19,857 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} 
>2018-10-02 08:30:19,879 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:19,879 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.177) 0:01:32.612 ******* >2018-10-02 08:30:19,899 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:19,919 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:19,919 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.040) 0:01:32.652 ******* >2018-10-02 08:30:19,936 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:19,957 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:19,957 p=1004 u=mistral | Tuesday 02 October 2018 08:30:19 -0400 (0:00:00.038) 0:01:32.691 ******* >2018-10-02 08:30:19,979 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,003 p=1004 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment for check-mode] *** >2018-10-02 08:30:20,004 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.046) 0:01:32.737 ******* >2018-10-02 08:30:20,021 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,042 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:20,043 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.039) 0:01:32.776 ******* >2018-10-02 08:30:20,063 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,083 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* 
>2018-10-02 08:30:20,083 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.040) 0:01:32.817 ******* >2018-10-02 08:30:20,102 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,123 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:20,123 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.039) 0:01:32.857 ******* >2018-10-02 08:30:20,145 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,166 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:20,166 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.042) 0:01:32.899 ******* >2018-10-02 08:30:20,186 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:20,206 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:20,207 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.040) 0:01:32.940 ******* >2018-10-02 08:30:20,225 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:20,245 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:20,245 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.038) 0:01:32.979 ******* >2018-10-02 08:30:20,263 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:20,285 p=1004 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-10-02 08:30:20,285 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.039) 0:01:33.019 ******* >2018-10-02 08:30:20,923 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "8077b84090f708d19fa87a8fb04988565052209f", 
"dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-4647602e-01fd-4b3f-a518-ff34910b6a33", "gid": 0, "group": "root", "md5sum": "b6343cc7de6c0b14af43e9cca5d7bda4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19537, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483420.45-262415433084213/source", "state": "file", "uid": 0} >2018-10-02 08:30:20,945 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-10-02 08:30:20,945 p=1004 u=mistral | Tuesday 02 October 2018 08:30:20 -0400 (0:00:00.659) 0:01:33.679 ******* >2018-10-02 08:30:21,137 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:21,160 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-10-02 08:30:21,161 p=1004 u=mistral | Tuesday 02 October 2018 08:30:21 -0400 (0:00:00.215) 0:01:33.894 ******* >2018-10-02 08:30:21,178 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:21,202 p=1004 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-10-02 08:30:21,202 p=1004 u=mistral | Tuesday 02 October 2018 08:30:21 -0400 (0:00:00.041) 0:01:33.935 ******* >2018-10-02 08:30:21,222 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:21,243 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-10-02 08:30:21,243 p=1004 u=mistral | Tuesday 02 October 2018 08:30:21 -0400 (0:00:00.040) 0:01:33.976 ******* >2018-10-02 08:30:21,260 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:21,279 p=1004 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] 
**************************** >2018-10-02 08:30:21,279 p=1004 u=mistral | Tuesday 02 October 2018 08:30:21 -0400 (0:00:00.036) 0:01:34.012 ******* >2018-10-02 08:30:22,029 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.notify.json)", "delta": "0:00:00.557860", "end": "2018-10-02 08:30:22.009094", "rc": 0, "start": "2018-10-02 08:30:21.451234", "stderr": "[2018-10-02 08:30:21,478] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json\n[2018-10-02 08:30:21,610] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:21,610] (heat-config) [DEBUG] \n[2018-10-02 08:30:21,610] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:30:21,610] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.notify.json\n[2018-10-02 08:30:22,002] (heat-config) [INFO] \n[2018-10-02 08:30:22,003] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:21,478] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json", "[2018-10-02 08:30:21,610] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:21,610] (heat-config) [DEBUG] ", "[2018-10-02 08:30:21,610] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:30:21,610] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.notify.json", 
"[2018-10-02 08:30:22,002] (heat-config) [INFO] ", "[2018-10-02 08:30:22,003] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:22,053 p=1004 u=mistral | TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-10-02 08:30:22,054 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.774) 0:01:34.787 ******* >2018-10-02 08:30:22,102 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:21,478] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json", > "[2018-10-02 08:30:21,610] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:21,610] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:21,610] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:30:21,610] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.json < /var/lib/heat-config/deployed/4647602e-01fd-4b3f-a518-ff34910b6a33.notify.json", > "[2018-10-02 08:30:22,002] (heat-config) [INFO] ", > "[2018-10-02 08:30:22,003] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:22,125 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:22,126 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.072) 0:01:34.859 ******* >2018-10-02 08:30:22,141 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,163 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:22,163 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.037) 0:01:34.897 ******* >2018-10-02 08:30:22,227 p=1004 
u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "34288f42-a8b4-4d07-b499-08f8f636ffc3"}, "changed": false} >2018-10-02 08:30:22,249 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:22,249 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.086) 0:01:34.983 ******* >2018-10-02 08:30:22,317 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:22,339 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:22,340 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.090) 0:01:35.073 ******* >2018-10-02 08:30:22,359 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,380 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:22,380 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.040) 0:01:35.114 ******* >2018-10-02 08:30:22,399 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,421 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:22,421 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.040) 0:01:35.154 ******* >2018-10-02 08:30:22,445 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,471 p=1004 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment for check-mode] *** >2018-10-02 08:30:22,471 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.050) 0:01:35.204 ******* >2018-10-02 08:30:22,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-10-02 08:30:22,512 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:22,512 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.041) 0:01:35.245 ******* >2018-10-02 08:30:22,531 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,552 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:22,552 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.040) 0:01:35.285 ******* >2018-10-02 08:30:22,570 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,592 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:22,592 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.039) 0:01:35.325 ******* >2018-10-02 08:30:22,614 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,635 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:22,635 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.043) 0:01:35.368 ******* >2018-10-02 08:30:22,656 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:22,675 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:22,675 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.040) 0:01:35.409 ******* >2018-10-02 08:30:22,692 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:22,714 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:22,714 p=1004 u=mistral | Tuesday 02 
October 2018 08:30:22 -0400 (0:00:00.038) 0:01:35.447 ******* >2018-10-02 08:30:22,732 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:22,753 p=1004 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-10-02 08:30:22,753 p=1004 u=mistral | Tuesday 02 October 2018 08:30:22 -0400 (0:00:00.039) 0:01:35.487 ******* >2018-10-02 08:30:23,317 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f095996e4535bcfffa3a1c987c5473973e218fcf", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-34288f42-a8b4-4d07-b499-08f8f636ffc3", "gid": 0, "group": "root", "md5sum": "85e5a478b9bca13554816a5c467aff09", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483422.82-96487356637163/source", "state": "file", "uid": 0} >2018-10-02 08:30:23,339 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-10-02 08:30:23,340 p=1004 u=mistral | Tuesday 02 October 2018 08:30:23 -0400 (0:00:00.586) 0:01:36.073 ******* >2018-10-02 08:30:23,544 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:23,568 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-10-02 08:30:23,568 p=1004 u=mistral | Tuesday 02 October 2018 08:30:23 -0400 (0:00:00.228) 0:01:36.301 ******* >2018-10-02 08:30:23,586 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:23,608 p=1004 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 08:30:23,608 p=1004 u=mistral | Tuesday 02 October 2018 08:30:23 -0400 (0:00:00.040) 0:01:36.342 ******* >2018-10-02 08:30:23,629 p=1004 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:23,652 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-10-02 08:30:23,652 p=1004 u=mistral | Tuesday 02 October 2018 08:30:23 -0400 (0:00:00.043) 0:01:36.385 ******* >2018-10-02 08:30:23,670 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:23,692 p=1004 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-10-02 08:30:23,692 p=1004 u=mistral | Tuesday 02 October 2018 08:30:23 -0400 (0:00:00.040) 0:01:36.426 ******* >2018-10-02 08:30:24,917 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.notify.json)", "delta": "0:00:01.016490", "end": "2018-10-02 08:30:24.897042", "rc": 0, "start": "2018-10-02 08:30:23.880552", "stderr": "[2018-10-02 08:30:23,906] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json\n[2018-10-02 08:30:24,508] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:24,508] (heat-config) [DEBUG] 
[2018-10-02 08:30:23,928] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10\n[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-jwdnjbeivdh4-0-7o6rz7l2we4s/975fba26-3be6-4213-8f5a-4ec1968d760a\n[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:23,929] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3\n[2018-10-02 08:30:24,504] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\nPing to 172.17.3.15 succeeded.\nSUCCESS\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\nPing to 172.17.4.31 succeeded.\nSUCCESS\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\nPing to 192.168.24.10 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-10-02 08:30:24,504] (heat-config) [DEBUG] \n[2018-10-02 08:30:24,504] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3\n\n[2018-10-02 08:30:24,508] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:24,509] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json < 
/var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.notify.json\n[2018-10-02 08:30:24,891] (heat-config) [INFO] \n[2018-10-02 08:30:24,891] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:23,906] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json", "[2018-10-02 08:30:24,508] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:24,508] (heat-config) [DEBUG] [2018-10-02 08:30:23,928] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", "[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_fqdn=False", "[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-jwdnjbeivdh4-0-7o6rz7l2we4s/975fba26-3be6-4213-8f5a-4ec1968d760a", "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:23,929] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3", "[2018-10-02 08:30:24,504] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", "Ping to 172.17.3.15 succeeded.", "SUCCESS", "Trying to ping 172.17.4.31 for local network 172.17.4.0/24.", "Ping to 172.17.4.31 succeeded.", "SUCCESS", "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", "Ping to 192.168.24.10 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-10-02 08:30:24,504] (heat-config) [DEBUG] ", "[2018-10-02 08:30:24,504] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3", "", "[2018-10-02 08:30:24,508] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:24,509] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json < /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.notify.json", "[2018-10-02 08:30:24,891] (heat-config) [INFO] ", "[2018-10-02 08:30:24,891] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:24,941 p=1004 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-10-02 08:30:24,941 p=1004 u=mistral | Tuesday 02 October 2018 08:30:24 -0400 (0:00:01.248) 0:01:37.674 ******* >2018-10-02 08:30:25,063 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:23,906] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json", > "[2018-10-02 08:30:24,508] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for 
local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.31 for local network 172.17.4.0/24.\\nPing to 172.17.4.31 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:24,508] (heat-config) [DEBUG] [2018-10-02 08:30:23,928] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-jwdnjbeivdh4-0-7o6rz7l2we4s/975fba26-3be6-4213-8f5a-4ec1968d760a", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:23,929] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:23,929] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3", > "[2018-10-02 08:30:24,504] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", > "Ping to 172.17.3.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.31 for local network 172.17.4.0/24.", > "Ping to 172.17.4.31 succeeded.", > 
"SUCCESS", > "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", > "Ping to 192.168.24.10 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 08:30:24,504] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:24,504] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/34288f42-a8b4-4d07-b499-08f8f636ffc3", > "", > "[2018-10-02 08:30:24,508] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:24,509] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.json < /var/lib/heat-config/deployed/34288f42-a8b4-4d07-b499-08f8f636ffc3.notify.json", > "[2018-10-02 08:30:24,891] (heat-config) [INFO] ", > "[2018-10-02 08:30:24,891] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:25,087 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:25,087 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.146) 0:01:37.820 ******* >2018-10-02 08:30:25,102 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,122 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:25,122 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.034) 0:01:37.855 ******* >2018-10-02 08:30:25,256 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "939fe603-f05a-41a6-94ad-e2e363fd574d"}, "changed": false} >2018-10-02 08:30:25,276 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:25,276 p=1004 u=mistral | Tuesday 02 October 
2018 08:30:25 -0400 (0:00:00.153) 0:01:38.009 ******* >2018-10-02 08:30:25,413 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} >2018-10-02 08:30:25,432 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:25,432 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.156) 0:01:38.166 ******* >2018-10-02 08:30:25,454 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,476 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:25,476 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.044) 0:01:38.210 ******* >2018-10-02 08:30:25,495 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,570 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:25,570 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.093) 0:01:38.304 ******* >2018-10-02 08:30:25,589 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,610 p=1004 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment for check-mode] *** >2018-10-02 08:30:25,610 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.040) 0:01:38.344 ******* >2018-10-02 08:30:25,628 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,648 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:25,648 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.037) 0:01:38.382 ******* >2018-10-02 08:30:25,665 p=1004 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,684 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:25,684 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.036) 0:01:38.418 ******* >2018-10-02 08:30:25,702 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,725 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:25,725 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.040) 0:01:38.458 ******* >2018-10-02 08:30:25,747 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,768 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:25,768 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.043) 0:01:38.502 ******* >2018-10-02 08:30:25,789 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:25,812 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:25,812 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.043) 0:01:38.545 ******* >2018-10-02 08:30:25,831 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:25,853 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:25,853 p=1004 u=mistral | Tuesday 02 October 2018 08:30:25 -0400 (0:00:00.040) 0:01:38.586 ******* >2018-10-02 08:30:25,870 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:25,896 p=1004 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-10-02 08:30:25,896 p=1004 u=mistral | Tuesday 02 October 
2018 08:30:25 -0400 (0:00:00.042) 0:01:38.629 ******* >2018-10-02 08:30:26,457 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "4f908fd5f704492d889360fe061254559b43432f", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-939fe603-f05a-41a6-94ad-e2e363fd574d", "gid": 0, "group": "root", "md5sum": "d1adfc7b08150a844488a0f73ff68185", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21380, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483425.97-277663645356362/source", "state": "file", "uid": 0} >2018-10-02 08:30:26,478 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-10-02 08:30:26,479 p=1004 u=mistral | Tuesday 02 October 2018 08:30:26 -0400 (0:00:00.582) 0:01:39.212 ******* >2018-10-02 08:30:26,693 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:26,716 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-10-02 08:30:26,716 p=1004 u=mistral | Tuesday 02 October 2018 08:30:26 -0400 (0:00:00.237) 0:01:39.449 ******* >2018-10-02 08:30:26,733 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:26,755 p=1004 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-10-02 08:30:26,755 p=1004 u=mistral | Tuesday 02 October 2018 08:30:26 -0400 (0:00:00.038) 0:01:39.488 ******* >2018-10-02 08:30:26,773 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:26,794 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-10-02 08:30:26,794 p=1004 u=mistral | Tuesday 02 October 2018 08:30:26 -0400 (0:00:00.039) 0:01:39.527 ******* >2018-10-02 08:30:26,811 p=1004 u=mistral 
| skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:26,836 p=1004 u=mistral | TASK [Run deployment CephStorageHostPrepDeployment] **************************** >2018-10-02 08:30:26,836 p=1004 u=mistral | Tuesday 02 October 2018 08:30:26 -0400 (0:00:00.042) 0:01:39.570 ******* >2018-10-02 08:30:33,079 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.notify.json)", "delta": "0:00:06.022456", "end": "2018-10-02 08:30:33.053765", "rc": 0, "start": "2018-10-02 08:30:27.031309", "stderr": "[2018-10-02 08:30:27,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json\n[2018-10-02 08:30:32,647] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:32,647] (heat-config) [DEBUG] [2018-10-02 08:30:27,085] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_variables.json\n[2018-10-02 08:30:32,643] (heat-config) [INFO] Return code 0\n[2018-10-02 08:30:32,643] 
(heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 08:30:32,643] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml\n\n[2018-10-02 08:30:32,647] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 08:30:32,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.notify.json\n[2018-10-02 08:30:33,046] (heat-config) [INFO] \n[2018-10-02 08:30:33,046] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:27,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json", "[2018-10-02 08:30:32,647] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 
0}", "[2018-10-02 08:30:32,647] (heat-config) [DEBUG] [2018-10-02 08:30:27,085] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_variables.json", "[2018-10-02 08:30:32,643] (heat-config) [INFO] Return code 0", "[2018-10-02 08:30:32,643] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 08:30:32,643] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml", "", "[2018-10-02 08:30:32,647] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 08:30:32,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.notify.json", "[2018-10-02 08:30:33,046] (heat-config) [INFO] ", "[2018-10-02 08:30:33,046] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:33,103 p=1004 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-10-02 08:30:33,103 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:06.266) 0:01:45.836 ******* >2018-10-02 08:30:33,160 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 
08:30:27,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json", > "[2018-10-02 08:30:32,647] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:32,647] (heat-config) [DEBUG] [2018-10-02 08:30:27,085] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_variables.json", > "[2018-10-02 08:30:32,643] (heat-config) [INFO] Return code 0", > "[2018-10-02 08:30:32,643] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 08:30:32,643] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-ansible/939fe603-f05a-41a6-94ad-e2e363fd574d_playbook.yaml", > "", > "[2018-10-02 08:30:32,647] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 08:30:32,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.json < /var/lib/heat-config/deployed/939fe603-f05a-41a6-94ad-e2e363fd574d.notify.json", > "[2018-10-02 08:30:33,046] (heat-config) [INFO] ", > "[2018-10-02 08:30:33,046] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:33,185 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:33,185 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.081) 0:01:45.918 ******* >2018-10-02 08:30:33,201 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,221 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:33,222 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.036) 0:01:45.955 ******* >2018-10-02 08:30:33,280 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "89ad4f02-afaa-4aa7-bc50-427d129b2db9"}, "changed": false} >2018-10-02 08:30:33,301 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:33,301 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.079) 0:01:46.035 ******* >2018-10-02 08:30:33,358 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:33,378 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:33,378 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.076) 0:01:46.111 
******* >2018-10-02 08:30:33,399 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,420 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:33,420 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.041) 0:01:46.153 ******* >2018-10-02 08:30:33,437 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,456 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:33,456 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.035) 0:01:46.189 ******* >2018-10-02 08:30:33,471 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,493 p=1004 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy for check-mode] **** >2018-10-02 08:30:33,493 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.037) 0:01:46.227 ******* >2018-10-02 08:30:33,509 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,527 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:33,527 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.033) 0:01:46.260 ******* >2018-10-02 08:30:33,543 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,560 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:33,560 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.033) 0:01:46.294 ******* >2018-10-02 08:30:33,575 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-10-02 08:30:33,593 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:33,594 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.033) 0:01:46.327 ******* >2018-10-02 08:30:33,611 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,629 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:33,629 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.035) 0:01:46.363 ******* >2018-10-02 08:30:33,647 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:33,665 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:33,665 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.035) 0:01:46.398 ******* >2018-10-02 08:30:33,681 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:33,699 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:33,699 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.033) 0:01:46.432 ******* >2018-10-02 08:30:33,719 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:30:33,744 p=1004 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-10-02 08:30:33,744 p=1004 u=mistral | Tuesday 02 October 2018 08:30:33 -0400 (0:00:00.045) 0:01:46.477 ******* >2018-10-02 08:30:34,287 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "3c41383c4be83c2bf329d4bf4835a30f7b6a9ff8", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-89ad4f02-afaa-4aa7-bc50-427d129b2db9", "gid": 0, "group": "root", "md5sum": "85dab0b9e5183538044d8b447b8b8652", "mode": "0644", "owner": "root", 
"secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483433.8-201180491102211/source", "state": "file", "uid": 0} >2018-10-02 08:30:34,311 p=1004 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-10-02 08:30:34,311 p=1004 u=mistral | Tuesday 02 October 2018 08:30:34 -0400 (0:00:00.566) 0:01:47.044 ******* >2018-10-02 08:30:34,522 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:34,546 p=1004 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-10-02 08:30:34,546 p=1004 u=mistral | Tuesday 02 October 2018 08:30:34 -0400 (0:00:00.235) 0:01:47.280 ******* >2018-10-02 08:30:34,565 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:34,587 p=1004 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-10-02 08:30:34,587 p=1004 u=mistral | Tuesday 02 October 2018 08:30:34 -0400 (0:00:00.041) 0:01:47.321 ******* >2018-10-02 08:30:34,607 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:34,629 p=1004 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-10-02 08:30:34,629 p=1004 u=mistral | Tuesday 02 October 2018 08:30:34 -0400 (0:00:00.041) 0:01:47.363 ******* >2018-10-02 08:30:34,647 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:34,668 p=1004 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-10-02 08:30:34,668 p=1004 u=mistral | Tuesday 02 October 2018 08:30:34 -0400 (0:00:00.039) 0:01:47.402 ******* >2018-10-02 08:30:35,360 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": 
"/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.notify.json)", "delta": "0:00:00.477311", "end": "2018-10-02 08:30:35.338796", "rc": 0, "start": "2018-10-02 08:30:34.861485", "stderr": "[2018-10-02 08:30:34,889] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json\n[2018-10-02 08:30:34,925] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:34,925] (heat-config) [DEBUG] [2018-10-02 08:30:34,914] (heat-config) [INFO] artifact_urls=\n[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866\n[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-CephStorageArtifactsDeploy-rwkdaufhycrd-0-3bwieq5mgrwu/306d1475-7817-4c39-b7cd-c8d083d4d44f\n[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:34,915] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9\n[2018-10-02 08:30:34,921] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-10-02 08:30:34,921] (heat-config) [DEBUG] \n[2018-10-02 08:30:34,921] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9\n\n[2018-10-02 08:30:34,925] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:34,925] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.notify.json\n[2018-10-02 08:30:35,332] (heat-config) [INFO] \n[2018-10-02 08:30:35,333] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:34,889] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json", "[2018-10-02 08:30:34,925] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:34,925] (heat-config) [DEBUG] [2018-10-02 08:30:34,914] (heat-config) [INFO] artifact_urls=", "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-CephStorageArtifactsDeploy-rwkdaufhycrd-0-3bwieq5mgrwu/306d1475-7817-4c39-b7cd-c8d083d4d44f", "[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:34,915] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9", "[2018-10-02 08:30:34,921] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-10-02 08:30:34,921] (heat-config) [DEBUG] ", "[2018-10-02 08:30:34,921] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9", "", "[2018-10-02 08:30:34,925] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:34,925] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.notify.json", "[2018-10-02 08:30:35,332] (heat-config) [INFO] ", "[2018-10-02 08:30:35,333] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:35,382 p=1004 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-10-02 08:30:35,382 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.714) 0:01:48.116 ******* >2018-10-02 08:30:35,441 p=1004 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:34,889] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json", > "[2018-10-02 08:30:34,925] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:34,925] (heat-config) [DEBUG] [2018-10-02 08:30:34,914] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_server_id=fe5a200b-5cb5-45d9-ac77-9aa53cfee866", > "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:34,914] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-CephStorageArtifactsDeploy-rwkdaufhycrd-0-3bwieq5mgrwu/306d1475-7817-4c39-b7cd-c8d083d4d44f", > "[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:34,915] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:34,915] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9", > "[2018-10-02 08:30:34,921] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-10-02 08:30:34,921] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:34,921] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/89ad4f02-afaa-4aa7-bc50-427d129b2db9", > "", > "[2018-10-02 08:30:34,925] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:34,925] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.json < /var/lib/heat-config/deployed/89ad4f02-afaa-4aa7-bc50-427d129b2db9.notify.json", > "[2018-10-02 08:30:35,332] (heat-config) [INFO] ", > "[2018-10-02 08:30:35,333] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:35,466 p=1004 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 08:30:35,466 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.083) 0:01:48.199 ******* >2018-10-02 08:30:35,482 p=1004 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,504 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:35,505 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.038) 0:01:48.238 ******* >2018-10-02 08:30:35,558 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0989ab13-9875-4089-9fdb-e5089b74a637"}, "changed": false} >2018-10-02 08:30:35,580 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:35,580 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.075) 0:01:48.314 ******* >2018-10-02 08:30:35,632 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:35,654 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:35,654 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.073) 0:01:48.387 ******* >2018-10-02 08:30:35,671 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,692 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:35,693 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.038) 0:01:48.426 ******* >2018-10-02 08:30:35,709 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,731 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:35,731 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.038) 0:01:48.465 ******* >2018-10-02 08:30:35,755 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:30:35,779 p=1004 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment for check-mode] *** >2018-10-02 08:30:35,780 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.048) 0:01:48.513 ******* >2018-10-02 08:30:35,799 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,820 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:35,820 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.040) 0:01:48.553 ******* >2018-10-02 08:30:35,839 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,860 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:35,860 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.040) 0:01:48.594 ******* >2018-10-02 08:30:35,878 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,899 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:35,899 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.038) 0:01:48.632 ******* >2018-10-02 08:30:35,919 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:35,940 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:35,940 p=1004 u=mistral | Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.040) 0:01:48.673 ******* >2018-10-02 08:30:35,961 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:35,981 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:35,982 p=1004 u=mistral | 
Tuesday 02 October 2018 08:30:35 -0400 (0:00:00.041) 0:01:48.715 ******* >2018-10-02 08:30:35,999 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:36,020 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:36,020 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.038) 0:01:48.753 ******* >2018-10-02 08:30:36,036 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:36,058 p=1004 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-10-02 08:30:36,058 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.038) 0:01:48.791 ******* >2018-10-02 08:30:36,605 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "1091434af3f91d80c9457edecf741bcf5ae76e1a", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-0989ab13-9875-4089-9fdb-e5089b74a637", "gid": 0, "group": "root", "md5sum": "902358b7406be8615b4575cabee7cbc1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483436.11-164701641544765/source", "state": "file", "uid": 0} >2018-10-02 08:30:36,627 p=1004 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-10-02 08:30:36,627 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.568) 0:01:49.360 ******* >2018-10-02 08:30:36,832 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:36,855 p=1004 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-10-02 08:30:36,856 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.228) 0:01:49.589 ******* >2018-10-02 08:30:36,874 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 08:30:36,898 p=1004 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-10-02 08:30:36,898 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.042) 0:01:49.632 ******* >2018-10-02 08:30:36,922 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:36,944 p=1004 u=mistral | TASK [Force remove deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-10-02 08:30:36,944 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.045) 0:01:49.677 ******* >2018-10-02 08:30:36,962 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:36,984 p=1004 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-10-02 08:30:36,985 p=1004 u=mistral | Tuesday 02 October 2018 08:30:36 -0400 (0:00:00.040) 0:01:49.718 ******* >2018-10-02 08:30:37,683 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.notify.json)", "delta": "0:00:00.487952", "end": "2018-10-02 08:30:37.660247", "rc": 0, "start": "2018-10-02 08:30:37.172295", "stderr": "[2018-10-02 08:30:37,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json\n[2018-10-02 08:30:37,229] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:37,230] (heat-config) [DEBUG] [2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:37,222] 
(heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NovaComputeUpgradeInitDeployment-irfibzzdyytj/b3c525f8-ec2f-4d01-bb51-462a46e1a327\n[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:37,222] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637\n[2018-10-02 08:30:37,226] (heat-config) [INFO] \n[2018-10-02 08:30:37,226] (heat-config) [DEBUG] \n[2018-10-02 08:30:37,226] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637\n\n[2018-10-02 08:30:37,230] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:37,230] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.notify.json\n[2018-10-02 08:30:37,653] (heat-config) [INFO] \n[2018-10-02 08:30:37,653] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:37,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json", "[2018-10-02 08:30:37,229] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:37,230] (heat-config) [DEBUG] [2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NovaComputeUpgradeInitDeployment-irfibzzdyytj/b3c525f8-ec2f-4d01-bb51-462a46e1a327", "[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:37,222] 
(heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:37,222] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637", "[2018-10-02 08:30:37,226] (heat-config) [INFO] ", "[2018-10-02 08:30:37,226] (heat-config) [DEBUG] ", "[2018-10-02 08:30:37,226] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637", "", "[2018-10-02 08:30:37,230] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:37,230] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.notify.json", "[2018-10-02 08:30:37,653] (heat-config) [INFO] ", "[2018-10-02 08:30:37,653] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:37,705 p=1004 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-10-02 08:30:37,705 p=1004 u=mistral | Tuesday 02 October 2018 08:30:37 -0400 (0:00:00.720) 0:01:50.438 ******* >2018-10-02 08:30:37,828 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:37,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json", > "[2018-10-02 08:30:37,229] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:37,230] (heat-config) [DEBUG] [2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:30:37,221] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-trdtsryyz37p-0-5bmhuuygu7de-NovaComputeUpgradeInitDeployment-irfibzzdyytj/b3c525f8-ec2f-4d01-bb51-462a46e1a327", > "[2018-10-02 08:30:37,222] 
(heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:37,222] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:37,222] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637", > "[2018-10-02 08:30:37,226] (heat-config) [INFO] ", > "[2018-10-02 08:30:37,226] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:37,226] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0989ab13-9875-4089-9fdb-e5089b74a637", > "", > "[2018-10-02 08:30:37,230] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:37,230] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.json < /var/lib/heat-config/deployed/0989ab13-9875-4089-9fdb-e5089b74a637.notify.json", > "[2018-10-02 08:30:37,653] (heat-config) [INFO] ", > "[2018-10-02 08:30:37,653] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:37,851 p=1004 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:37,851 p=1004 u=mistral | Tuesday 02 October 2018 08:30:37 -0400 (0:00:00.146) 0:01:50.585 ******* >2018-10-02 08:30:37,867 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:37,887 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:37,888 p=1004 u=mistral | Tuesday 02 October 2018 08:30:37 -0400 (0:00:00.036) 0:01:50.621 ******* >2018-10-02 08:30:38,104 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c47fef7d-602c-4bd9-bdce-4532e7ed619b"}, "changed": false} >2018-10-02 08:30:38,125 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:38,125 
p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.237) 0:01:50.859 ******* >2018-10-02 08:30:38,347 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 08:30:38,369 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:38,369 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.244) 0:01:51.103 ******* >2018-10-02 08:30:38,388 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,409 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:38,410 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.040) 0:01:51.143 ******* >2018-10-02 08:30:38,428 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,449 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:38,449 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.039) 0:01:51.182 ******* >2018-10-02 08:30:38,471 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,496 p=1004 u=mistral | TASK [Render deployment file for NovaComputeDeployment for check-mode] ********* >2018-10-02 08:30:38,496 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.047) 0:01:51.230 ******* >2018-10-02 08:30:38,588 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,652 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:38,652 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.155) 0:01:51.386 ******* >2018-10-02 08:30:38,675 
p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,693 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:38,693 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.041) 0:01:51.427 ******* >2018-10-02 08:30:38,715 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,736 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:38,737 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.043) 0:01:51.470 ******* >2018-10-02 08:30:38,758 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,778 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:38,778 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.041) 0:01:51.511 ******* >2018-10-02 08:30:38,798 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:38,818 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:38,818 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.040) 0:01:51.552 ******* >2018-10-02 08:30:38,836 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:38,855 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:38,856 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.037) 0:01:51.589 ******* >2018-10-02 08:30:38,873 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:38,894 p=1004 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ 
>2018-10-02 08:30:38,894 p=1004 u=mistral | Tuesday 02 October 2018 08:30:38 -0400 (0:00:00.038) 0:01:51.628 ******* >2018-10-02 08:30:39,529 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "0dee1f908274a4425eed7cfb74e9da5bca7b01f3", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-c47fef7d-602c-4bd9-bdce-4532e7ed619b", "gid": 0, "group": "root", "md5sum": "eaa318b735568e0914b22b7088a7278c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483439.05-236984494973106/source", "state": "file", "uid": 0} >2018-10-02 08:30:39,552 p=1004 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-10-02 08:30:39,552 p=1004 u=mistral | Tuesday 02 October 2018 08:30:39 -0400 (0:00:00.657) 0:01:52.285 ******* >2018-10-02 08:30:39,752 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:39,776 p=1004 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-10-02 08:30:39,776 p=1004 u=mistral | Tuesday 02 October 2018 08:30:39 -0400 (0:00:00.224) 0:01:52.510 ******* >2018-10-02 08:30:39,796 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:39,820 p=1004 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-10-02 08:30:39,820 p=1004 u=mistral | Tuesday 02 October 2018 08:30:39 -0400 (0:00:00.043) 0:01:52.553 ******* >2018-10-02 08:30:39,840 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:39,863 p=1004 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-10-02 08:30:39,863 p=1004 u=mistral | Tuesday 02 October 2018 08:30:39 -0400 (0:00:00.043) 
0:01:52.596 ******* >2018-10-02 08:30:39,881 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:39,905 p=1004 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-10-02 08:30:39,905 p=1004 u=mistral | Tuesday 02 October 2018 08:30:39 -0400 (0:00:00.041) 0:01:52.638 ******* >2018-10-02 08:30:40,698 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.notify.json)", "delta": "0:00:00.587467", "end": "2018-10-02 08:30:40.676713", "rc": 0, "start": "2018-10-02 08:30:40.089246", "stderr": "[2018-10-02 08:30:40,119] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json\n[2018-10-02 08:30:40,253] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:40,253] (heat-config) [DEBUG] \n[2018-10-02 08:30:40,254] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:30:40,254] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.notify.json\n[2018-10-02 08:30:40,670] (heat-config) [INFO] \n[2018-10-02 08:30:40,670] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:40,119] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json", "[2018-10-02 08:30:40,253] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:40,253] (heat-config) [DEBUG] ", "[2018-10-02 08:30:40,254] (heat-config) [INFO] Completed 
/usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:30:40,254] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.notify.json", "[2018-10-02 08:30:40,670] (heat-config) [INFO] ", "[2018-10-02 08:30:40,670] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:40,721 p=1004 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-10-02 08:30:40,721 p=1004 u=mistral | Tuesday 02 October 2018 08:30:40 -0400 (0:00:00.816) 0:01:53.455 ******* >2018-10-02 08:30:40,773 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:40,119] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json", > "[2018-10-02 08:30:40,253] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:40,253] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:40,254] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:30:40,254] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.json < /var/lib/heat-config/deployed/c47fef7d-602c-4bd9-bdce-4532e7ed619b.notify.json", > "[2018-10-02 08:30:40,670] (heat-config) [INFO] ", > "[2018-10-02 08:30:40,670] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:40,797 p=1004 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:40,797 p=1004 u=mistral | Tuesday 02 October 2018 08:30:40 -0400 (0:00:00.075) 0:01:53.530 ******* >2018-10-02 08:30:40,813 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:30:40,833 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:40,834 p=1004 u=mistral | Tuesday 02 October 2018 08:30:40 -0400 (0:00:00.036) 0:01:53.567 ******* >2018-10-02 08:30:40,896 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "6532841f-6499-4f73-b35f-1b6ab3cf4fc0"}, "changed": false} >2018-10-02 08:30:40,915 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:40,915 p=1004 u=mistral | Tuesday 02 October 2018 08:30:40 -0400 (0:00:00.081) 0:01:53.649 ******* >2018-10-02 08:30:40,978 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:40,999 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:40,999 p=1004 u=mistral | Tuesday 02 October 2018 08:30:40 -0400 (0:00:00.083) 0:01:53.733 ******* >2018-10-02 08:30:41,017 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,036 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:41,037 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.037) 0:01:53.770 ******* >2018-10-02 08:30:41,053 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,073 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:41,073 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.036) 0:01:53.806 ******* >2018-10-02 08:30:41,091 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,112 p=1004 u=mistral | TASK [Render deployment file for 
ComputeHostsDeployment for check-mode] ******** >2018-10-02 08:30:41,113 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.039) 0:01:53.846 ******* >2018-10-02 08:30:41,132 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,155 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:41,155 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.042) 0:01:53.888 ******* >2018-10-02 08:30:41,174 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,193 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:41,194 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.038) 0:01:53.927 ******* >2018-10-02 08:30:41,212 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,232 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:41,232 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.038) 0:01:53.965 ******* >2018-10-02 08:30:41,253 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,274 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:41,274 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.041) 0:01:54.007 ******* >2018-10-02 08:30:41,295 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:41,315 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:41,315 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.041) 0:01:54.048 ******* >2018-10-02 
08:30:41,334 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:41,355 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:41,355 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.039) 0:01:54.088 ******* >2018-10-02 08:30:41,371 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:41,392 p=1004 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-10-02 08:30:41,393 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.037) 0:01:54.126 ******* >2018-10-02 08:30:41,930 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "37635a982b4681cf0b59044ff9c6d6c487e40c12", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-6532841f-6499-4f73-b35f-1b6ab3cf4fc0", "gid": 0, "group": "root", "md5sum": "8bc8567840637bcec730bc3324283c52", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4424, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483441.45-85478654998900/source", "state": "file", "uid": 0} >2018-10-02 08:30:41,954 p=1004 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-10-02 08:30:41,954 p=1004 u=mistral | Tuesday 02 October 2018 08:30:41 -0400 (0:00:00.561) 0:01:54.688 ******* >2018-10-02 08:30:42,153 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:42,174 p=1004 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-10-02 08:30:42,174 p=1004 u=mistral | Tuesday 02 October 2018 08:30:42 -0400 (0:00:00.219) 0:01:54.907 ******* >2018-10-02 08:30:42,193 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:42,213 p=1004 u=mistral | TASK 
[Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-10-02 08:30:42,213 p=1004 u=mistral | Tuesday 02 October 2018 08:30:42 -0400 (0:00:00.038) 0:01:54.946 ******* >2018-10-02 08:30:42,233 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:42,253 p=1004 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-10-02 08:30:42,254 p=1004 u=mistral | Tuesday 02 October 2018 08:30:42 -0400 (0:00:00.040) 0:01:54.987 ******* >2018-10-02 08:30:42,272 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:42,292 p=1004 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-10-02 08:30:42,292 p=1004 u=mistral | Tuesday 02 October 2018 08:30:42 -0400 (0:00:00.038) 0:01:55.025 ******* >2018-10-02 08:30:43,002 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.notify.json)", "delta": "0:00:00.486279", "end": "2018-10-02 08:30:42.950600", "rc": 0, "start": "2018-10-02 08:30:42.464321", "stderr": "[2018-10-02 08:30:42,490] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json\n[2018-10-02 08:30:42,543] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 
overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain 
controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:42,543] (heat-config) [DEBUG] [2018-10-02 08:30:42,513] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 08:30:42,513] (heat-config) [INFO] 
deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-2nowedjlbxv7-0-lwsnu47wy7yg/7e138deb-ef7b-4088-9a34-229e9238eea9\n[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:42,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0\n[2018-10-02 08:30:42,539] (heat-config) [INFO] \n[2018-10-02 08:30:42,539] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 
ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 
overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /compute-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\n172.17.3.10 overcloud.storage.localdomain\n172.17.4.18 overcloud.storagemgmt.localdomain\n172.17.1.28 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.20 controller-0.localdomain controller-0\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.10 controller-0.management.localdomain controller-0.management\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.12 compute-0.external.localdomain compute-0.external\n192.168.24.12 compute-0.management.localdomain compute-0.management\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.26 ceph-0.localdomain ceph-0\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 08:30:42,539] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0\n\n[2018-10-02 08:30:42,543] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:42,544] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.notify.json\n[2018-10-02 08:30:42,943] (heat-config) [INFO] \n[2018-10-02 08:30:42,943] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:42,490] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json", "[2018-10-02 08:30:42,543] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain 
compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:42,543] (heat-config) [DEBUG] [2018-10-02 08:30:42,513] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 08:30:42,513] (heat-config) 
[INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-2nowedjlbxv7-0-lwsnu47wy7yg/7e138deb-ef7b-4088-9a34-229e9238eea9", "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:42,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0", "[2018-10-02 08:30:42,539] (heat-config) [INFO] ", "[2018-10-02 08:30:42,539] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", 
"192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", 
"192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /compute-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", "172.17.3.10 overcloud.storage.localdomain", "172.17.4.18 overcloud.storagemgmt.localdomain", "172.17.1.28 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.20 controller-0.localdomain controller-0", "172.17.3.15 controller-0.storage.localdomain controller-0.storage", "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.10 controller-0.management.localdomain controller-0.management", "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.11 compute-0.storage.localdomain compute-0.storage", "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.12 compute-0.external.localdomain compute-0.external", "192.168.24.12 compute-0.management.localdomain compute-0.management", "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.26 ceph-0.localdomain ceph-0", "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.8 ceph-0.external.localdomain ceph-0.external", "192.168.24.8 ceph-0.management.localdomain ceph-0.management", "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 08:30:42,539] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0", "", "[2018-10-02 08:30:42,543] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:42,544] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.notify.json", "[2018-10-02 08:30:42,943] (heat-config) [INFO] ", "[2018-10-02 08:30:42,943] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:43,045 p=1004 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-10-02 08:30:43,046 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.753) 0:01:55.779 ******* >2018-10-02 08:30:43,127 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:42,490] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json", > "[2018-10-02 08:30:42,543] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 
overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain 
controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.16 
overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.16 overcloud.ctlplane.localdomain\\n172.17.3.10 overcloud.storage.localdomain\\n172.17.4.18 overcloud.storagemgmt.localdomain\\n172.17.1.28 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.20 controller-0.localdomain controller-0\\n172.17.3.15 controller-0.storage.localdomain controller-0.storage\\n172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.19 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.10 controller-0.management.localdomain controller-0.management\\n192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.11 compute-0.storage.localdomain compute-0.storage\\n192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.12 compute-0.external.localdomain compute-0.external\\n192.168.24.12 compute-0.management.localdomain compute-0.management\\n192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.26 ceph-0.localdomain ceph-0\\n172.17.3.26 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.8 ceph-0.external.localdomain ceph-0.external\\n192.168.24.8 ceph-0.management.localdomain ceph-0.management\\n192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:42,543] (heat-config) [DEBUG] [2018-10-02 08:30:42,513] (heat-config) [INFO] hosts=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-2nowedjlbxv7-0-lwsnu47wy7yg/7e138deb-ef7b-4088-9a34-229e9238eea9", > "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:42,513] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:42,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0", > "[2018-10-02 08:30:42,539] (heat-config) [INFO] ", > "[2018-10-02 08:30:42,539] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", 
> "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 
compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > 
"192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", 
> "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > 
"172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain 
compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain 
controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 
controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > 
"192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.16 overcloud.ctlplane.localdomain", > "172.17.3.10 overcloud.storage.localdomain", > "172.17.4.18 overcloud.storagemgmt.localdomain", > "172.17.1.28 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.20 controller-0.localdomain controller-0", > "172.17.3.15 controller-0.storage.localdomain controller-0.storage", > "172.17.4.31 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.20 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.19 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.10 controller-0.management.localdomain controller-0.management", > "192.168.24.10 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.11 compute-0.storage.localdomain compute-0.storage", > "192.168.24.12 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.12 compute-0.external.localdomain compute-0.external", > "192.168.24.12 compute-0.management.localdomain compute-0.management", > "192.168.24.12 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.26 ceph-0.localdomain ceph-0", > "172.17.3.26 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.17 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.8 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.8 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.8 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.8 ceph-0.management.localdomain ceph-0.management", > "192.168.24.8 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 08:30:42,539] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6532841f-6499-4f73-b35f-1b6ab3cf4fc0", > "", > "[2018-10-02 08:30:42,543] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:42,544] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.json < /var/lib/heat-config/deployed/6532841f-6499-4f73-b35f-1b6ab3cf4fc0.notify.json", > "[2018-10-02 08:30:42,943] (heat-config) [INFO] ", > "[2018-10-02 08:30:42,943] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:43,167 p=1004 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:43,167 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.121) 0:01:55.901 ******* >2018-10-02 08:30:43,184 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,205 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:43,205 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.037) 0:01:55.938 ******* >2018-10-02 08:30:43,367 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c845e7d2-1402-435d-8867-931911316427"}, "changed": false} >2018-10-02 08:30:43,386 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:43,386 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.181) 0:01:56.120 ******* >2018-10-02 08:30:43,535 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": 
false} >2018-10-02 08:30:43,554 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:43,554 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.168) 0:01:56.288 ******* >2018-10-02 08:30:43,575 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,594 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:43,595 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.040) 0:01:56.328 ******* >2018-10-02 08:30:43,613 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,633 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:43,633 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.038) 0:01:56.366 ******* >2018-10-02 08:30:43,651 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,670 p=1004 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment for check-mode] ***** >2018-10-02 08:30:43,671 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.037) 0:01:56.404 ******* >2018-10-02 08:30:43,689 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,707 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:43,707 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.036) 0:01:56.440 ******* >2018-10-02 08:30:43,725 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,743 p=1004 u=mistral | TASK [List hieradata files for check mode] 
************************************* >2018-10-02 08:30:43,744 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.036) 0:01:56.477 ******* >2018-10-02 08:30:43,762 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,780 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:43,780 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.036) 0:01:56.514 ******* >2018-10-02 08:30:43,801 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,819 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:43,819 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.039) 0:01:56.553 ******* >2018-10-02 08:30:43,839 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:43,857 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:43,858 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.038) 0:01:56.591 ******* >2018-10-02 08:30:43,873 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:43,892 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:43,892 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.034) 0:01:56.625 ******* >2018-10-02 08:30:43,910 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:43,931 p=1004 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-10-02 08:30:43,931 p=1004 u=mistral | Tuesday 02 October 2018 08:30:43 -0400 (0:00:00.039) 0:01:56.665 ******* >2018-10-02 08:30:44,577 p=1004 u=mistral | changed: [compute-0] => {"changed": true, 
"checksum": "f5060aece58658f038580ad6dbe09a5fd48cfdc0", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-c845e7d2-1402-435d-8867-931911316427", "gid": 0, "group": "root", "md5sum": "854bb308aea934f1312d26d2475d6cc2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19537, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483444.1-157834738617207/source", "state": "file", "uid": 0} >2018-10-02 08:30:44,598 p=1004 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-10-02 08:30:44,598 p=1004 u=mistral | Tuesday 02 October 2018 08:30:44 -0400 (0:00:00.667) 0:01:57.332 ******* >2018-10-02 08:30:44,803 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:44,826 p=1004 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-10-02 08:30:44,826 p=1004 u=mistral | Tuesday 02 October 2018 08:30:44 -0400 (0:00:00.227) 0:01:57.559 ******* >2018-10-02 08:30:44,847 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:44,868 p=1004 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-10-02 08:30:44,868 p=1004 u=mistral | Tuesday 02 October 2018 08:30:44 -0400 (0:00:00.042) 0:01:57.602 ******* >2018-10-02 08:30:44,888 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:44,909 p=1004 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-10-02 08:30:44,909 p=1004 u=mistral | Tuesday 02 October 2018 08:30:44 -0400 (0:00:00.040) 0:01:57.642 ******* >2018-10-02 08:30:44,930 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:44,951 p=1004 u=mistral | 
TASK [Run deployment ComputeAllNodesDeployment] ******************************** >2018-10-02 08:30:44,952 p=1004 u=mistral | Tuesday 02 October 2018 08:30:44 -0400 (0:00:00.042) 0:01:57.685 ******* >2018-10-02 08:30:45,726 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.notify.json)", "delta": "0:00:00.568779", "end": "2018-10-02 08:30:45.702897", "rc": 0, "start": "2018-10-02 08:30:45.134118", "stderr": "[2018-10-02 08:30:45,162] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json\n[2018-10-02 08:30:45,286] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:45,286] (heat-config) [DEBUG] \n[2018-10-02 08:30:45,286] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 08:30:45,287] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json < /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.notify.json\n[2018-10-02 08:30:45,695] (heat-config) [INFO] \n[2018-10-02 08:30:45,695] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:45,162] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json", "[2018-10-02 08:30:45,286] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:45,286] (heat-config) [DEBUG] ", "[2018-10-02 08:30:45,286] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 08:30:45,287] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json < 
/var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.notify.json", "[2018-10-02 08:30:45,695] (heat-config) [INFO] ", "[2018-10-02 08:30:45,695] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:45,751 p=1004 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-10-02 08:30:45,751 p=1004 u=mistral | Tuesday 02 October 2018 08:30:45 -0400 (0:00:00.799) 0:01:58.485 ******* >2018-10-02 08:30:45,810 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:45,162] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json", > "[2018-10-02 08:30:45,286] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:45,286] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:45,286] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 08:30:45,287] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.json < /var/lib/heat-config/deployed/c845e7d2-1402-435d-8867-931911316427.notify.json", > "[2018-10-02 08:30:45,695] (heat-config) [INFO] ", > "[2018-10-02 08:30:45,695] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:45,834 p=1004 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:45,835 p=1004 u=mistral | Tuesday 02 October 2018 08:30:45 -0400 (0:00:00.083) 0:01:58.568 ******* >2018-10-02 08:30:45,850 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:45,871 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:45,871 p=1004 u=mistral | Tuesday 02 October 2018 
08:30:45 -0400 (0:00:00.035) 0:01:58.604 ******* >2018-10-02 08:30:45,939 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "80ee9338-c891-44a1-8ece-74958241b52a"}, "changed": false} >2018-10-02 08:30:45,957 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:45,957 p=1004 u=mistral | Tuesday 02 October 2018 08:30:45 -0400 (0:00:00.086) 0:01:58.691 ******* >2018-10-02 08:30:46,017 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:46,042 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:46,042 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.084) 0:01:58.775 ******* >2018-10-02 08:30:46,064 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,086 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:46,086 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.044) 0:01:58.819 ******* >2018-10-02 08:30:46,105 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,124 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:46,124 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.038) 0:01:58.858 ******* >2018-10-02 08:30:46,140 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,161 p=1004 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment for check-mode] *** >2018-10-02 08:30:46,161 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.036) 0:01:58.894 ******* >2018-10-02 08:30:46,180 p=1004 
u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,199 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:46,199 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.037) 0:01:58.932 ******* >2018-10-02 08:30:46,216 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,235 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:46,236 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.036) 0:01:58.969 ******* >2018-10-02 08:30:46,252 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,271 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:46,271 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.035) 0:01:59.004 ******* >2018-10-02 08:30:46,293 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,311 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:46,311 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.040) 0:01:59.044 ******* >2018-10-02 08:30:46,330 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:46,348 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:46,348 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.036) 0:01:59.081 ******* >2018-10-02 08:30:46,364 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:46,382 p=1004 u=mistral | TASK [diff hiera.yaml changes 
for check mode] ********************************** >2018-10-02 08:30:46,382 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.034) 0:01:59.116 ******* >2018-10-02 08:30:46,397 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:46,416 p=1004 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-10-02 08:30:46,417 p=1004 u=mistral | Tuesday 02 October 2018 08:30:46 -0400 (0:00:00.034) 0:01:59.150 ******* >2018-10-02 08:30:47,027 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "37410716b37ebd9d46ddb0619dedd3cd4a89b16d", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-80ee9338-c891-44a1-8ece-74958241b52a", "gid": 0, "group": "root", "md5sum": "82f40e23c8031c60434c8de65cba7d48", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483446.55-281084051621219/source", "state": "file", "uid": 0} >2018-10-02 08:30:47,048 p=1004 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-10-02 08:30:47,048 p=1004 u=mistral | Tuesday 02 October 2018 08:30:47 -0400 (0:00:00.631) 0:01:59.782 ******* >2018-10-02 08:30:47,310 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:47,332 p=1004 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-10-02 08:30:47,333 p=1004 u=mistral | Tuesday 02 October 2018 08:30:47 -0400 (0:00:00.284) 0:02:00.066 ******* >2018-10-02 08:30:47,351 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:47,373 p=1004 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 08:30:47,374 p=1004 u=mistral | Tuesday 02 October 2018 08:30:47 -0400 
(0:00:00.040) 0:02:00.107 ******* >2018-10-02 08:30:47,393 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:47,413 p=1004 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-10-02 08:30:47,413 p=1004 u=mistral | Tuesday 02 October 2018 08:30:47 -0400 (0:00:00.039) 0:02:00.146 ******* >2018-10-02 08:30:47,430 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:47,450 p=1004 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-10-02 08:30:47,450 p=1004 u=mistral | Tuesday 02 October 2018 08:30:47 -0400 (0:00:00.037) 0:02:00.183 ******* >2018-10-02 08:30:48,736 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.notify.json)", "delta": "0:00:01.016408", "end": "2018-10-02 08:30:48.715070", "rc": 0, "start": "2018-10-02 08:30:47.698662", "stderr": "[2018-10-02 08:30:47,725] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json\n[2018-10-02 08:30:48,317] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 
08:30:48,317] (heat-config) [DEBUG] [2018-10-02 08:30:47,749] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10\n[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 08:30:47,749] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-6v7ufqzl3bdz-0-qgojppr5q6bi/22eaa473-c668-45f8-9b70-cdb2b7ca8bd6\n[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:47,750] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a\n[2018-10-02 08:30:48,313] (heat-config) [INFO] Trying to ping 172.17.1.20 for local network 172.17.1.0/24.\nPing to 172.17.1.20 succeeded.\nSUCCESS\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\nPing to 172.17.2.19 succeeded.\nSUCCESS\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\nPing to 172.17.3.15 succeeded.\nSUCCESS\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\nPing to 192.168.24.10 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-10-02 08:30:48,313] (heat-config) [DEBUG] \n[2018-10-02 08:30:48,313] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a\n\n[2018-10-02 08:30:48,317] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:48,318] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json < 
/var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.notify.json\n[2018-10-02 08:30:48,709] (heat-config) [INFO] \n[2018-10-02 08:30:48,709] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:47,725] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json", "[2018-10-02 08:30:48,317] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:48,317] (heat-config) [DEBUG] [2018-10-02 08:30:47,749] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", "[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_fqdn=False", "[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 08:30:47,749] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-6v7ufqzl3bdz-0-qgojppr5q6bi/22eaa473-c668-45f8-9b70-cdb2b7ca8bd6", "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:47,750] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a", "[2018-10-02 08:30:48,313] (heat-config) [INFO] Trying to ping 172.17.1.20 for local network 172.17.1.0/24.", "Ping to 172.17.1.20 succeeded.", "SUCCESS", "Trying to ping 172.17.2.19 for local network 172.17.2.0/24.", "Ping to 172.17.2.19 succeeded.", "SUCCESS", "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", "Ping to 172.17.3.15 succeeded.", "SUCCESS", "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", "Ping to 192.168.24.10 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-10-02 08:30:48,313] (heat-config) [DEBUG] ", "[2018-10-02 08:30:48,313] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a", "", "[2018-10-02 08:30:48,317] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:48,318] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json < /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.notify.json", "[2018-10-02 08:30:48,709] (heat-config) [INFO] ", "[2018-10-02 08:30:48,709] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:48,758 p=1004 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-10-02 08:30:48,758 p=1004 u=mistral | Tuesday 02 October 2018 08:30:48 -0400 (0:00:01.308) 0:02:01.492 ******* >2018-10-02 08:30:48,814 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:47,725] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json", > "[2018-10-02 08:30:48,317] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.20 for local network 172.17.1.0/24.\\nPing to 172.17.1.20 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.19 for local network 172.17.2.0/24.\\nPing to 172.17.2.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.15 for local network 172.17.3.0/24.\\nPing to 172.17.3.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.10 for local network 192.168.24.0/24.\\nPing to 192.168.24.10 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:48,317] (heat-config) [DEBUG] [2018-10-02 08:30:47,749] (heat-config) [INFO] ping_test_ips=172.17.3.15 172.17.4.31 172.17.1.20 172.17.2.19 10.0.0.104 192.168.24.10", > "[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 08:30:47,749] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 08:30:47,749] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-6v7ufqzl3bdz-0-qgojppr5q6bi/22eaa473-c668-45f8-9b70-cdb2b7ca8bd6", > "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:47,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:47,750] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a", > "[2018-10-02 08:30:48,313] (heat-config) [INFO] Trying to ping 172.17.1.20 for local network 172.17.1.0/24.", > "Ping to 172.17.1.20 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.19 for local network 172.17.2.0/24.", > "Ping to 172.17.2.19 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.15 for local network 172.17.3.0/24.", > "Ping to 172.17.3.15 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.10 for local network 192.168.24.0/24.", > "Ping to 192.168.24.10 succeeded.", > 
"SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 08:30:48,313] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:48,313] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/80ee9338-c891-44a1-8ece-74958241b52a", > "", > "[2018-10-02 08:30:48,317] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:48,318] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.json < /var/lib/heat-config/deployed/80ee9338-c891-44a1-8ece-74958241b52a.notify.json", > "[2018-10-02 08:30:48,709] (heat-config) [INFO] ", > "[2018-10-02 08:30:48,709] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:48,838 p=1004 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:48,839 p=1004 u=mistral | Tuesday 02 October 2018 08:30:48 -0400 (0:00:00.080) 0:02:01.572 ******* >2018-10-02 08:30:48,853 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:48,872 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:48,872 p=1004 u=mistral | Tuesday 02 October 2018 08:30:48 -0400 (0:00:00.033) 0:02:01.606 ******* >2018-10-02 08:30:48,943 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "3b3ce860-a913-48f2-b076-8ceabc7753a2"}, "changed": false} >2018-10-02 08:30:48,963 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:48,963 p=1004 u=mistral | Tuesday 02 October 2018 08:30:48 -0400 (0:00:00.090) 0:02:01.696 ******* >2018-10-02 08:30:49,034 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} 
>2018-10-02 08:30:49,054 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:49,054 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.090) 0:02:01.787 ******* >2018-10-02 08:30:49,072 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,094 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:49,094 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.039) 0:02:01.827 ******* >2018-10-02 08:30:49,117 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,139 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:49,139 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.045) 0:02:01.872 ******* >2018-10-02 08:30:49,157 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,179 p=1004 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment for check-mode] ***** >2018-10-02 08:30:49,179 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.039) 0:02:01.912 ******* >2018-10-02 08:30:49,198 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,219 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:49,219 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.040) 0:02:01.953 ******* >2018-10-02 08:30:49,238 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,259 p=1004 u=mistral | TASK [List hieradata files for check mode] 
************************************* >2018-10-02 08:30:49,260 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.040) 0:02:01.993 ******* >2018-10-02 08:30:49,277 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,298 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:49,298 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.038) 0:02:02.031 ******* >2018-10-02 08:30:49,319 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,340 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:49,340 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.042) 0:02:02.073 ******* >2018-10-02 08:30:49,361 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:49,381 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:49,382 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.041) 0:02:02.115 ******* >2018-10-02 08:30:49,400 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:49,420 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:49,421 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.038) 0:02:02.154 ******* >2018-10-02 08:30:49,439 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:49,461 p=1004 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >2018-10-02 08:30:49,461 p=1004 u=mistral | Tuesday 02 October 2018 08:30:49 -0400 (0:00:00.040) 0:02:02.195 ******* >2018-10-02 08:30:50,021 p=1004 u=mistral | changed: [compute-0] => {"changed": true, 
"checksum": "444b7904922be26837dc17a7668fa426c2cef773", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-3b3ce860-a913-48f2-b076-8ceabc7753a2", "gid": 0, "group": "root", "md5sum": "2e90a38429e9e8729c20f9177f8e8c96", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21372, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483449.54-105044832972703/source", "state": "file", "uid": 0} >2018-10-02 08:30:50,043 p=1004 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-10-02 08:30:50,043 p=1004 u=mistral | Tuesday 02 October 2018 08:30:50 -0400 (0:00:00.581) 0:02:02.776 ******* >2018-10-02 08:30:50,237 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:50,260 p=1004 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-10-02 08:30:50,260 p=1004 u=mistral | Tuesday 02 October 2018 08:30:50 -0400 (0:00:00.217) 0:02:02.994 ******* >2018-10-02 08:30:50,280 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:50,301 p=1004 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-10-02 08:30:50,301 p=1004 u=mistral | Tuesday 02 October 2018 08:30:50 -0400 (0:00:00.040) 0:02:03.035 ******* >2018-10-02 08:30:50,323 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:50,345 p=1004 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-10-02 08:30:50,345 p=1004 u=mistral | Tuesday 02 October 2018 08:30:50 -0400 (0:00:00.043) 0:02:03.078 ******* >2018-10-02 08:30:50,363 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:50,383 p=1004 u=mistral 
| TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-10-02 08:30:50,383 p=1004 u=mistral | Tuesday 02 October 2018 08:30:50 -0400 (0:00:00.038) 0:02:03.117 ******* >2018-10-02 08:30:56,522 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.notify.json)", "delta": "0:00:05.935713", "end": "2018-10-02 08:30:56.497967", "rc": 0, "start": "2018-10-02 08:30:50.562254", "stderr": "[2018-10-02 08:30:50,590] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json\n[2018-10-02 08:30:56,125] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:56,125] (heat-config) [DEBUG] [2018-10-02 08:30:50,613] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_variables.json\n[2018-10-02 08:30:56,121] (heat-config) [INFO] Return code 0\n[2018-10-02 08:30:56,121] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] 
*********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 08:30:56,121] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml\n\n[2018-10-02 08:30:56,125] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 08:30:56,125] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json < /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.notify.json\n[2018-10-02 08:30:56,491] (heat-config) [INFO] \n[2018-10-02 08:30:56,491] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:50,590] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json", "[2018-10-02 08:30:56,125] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:56,125] (heat-config) [DEBUG] [2018-10-02 08:30:50,613] (heat-config) [DEBUG] Running ansible-playbook -i 
localhost, /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_variables.json", "[2018-10-02 08:30:56,121] (heat-config) [INFO] Return code 0", "[2018-10-02 08:30:56,121] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 08:30:56,121] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml", "", "[2018-10-02 08:30:56,125] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 08:30:56,125] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json < /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.notify.json", "[2018-10-02 08:30:56,491] (heat-config) [INFO] ", "[2018-10-02 08:30:56,491] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:56,545 p=1004 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-10-02 08:30:56,545 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:06.161) 0:02:09.279 ******* >2018-10-02 08:30:56,602 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:50,590] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < 
/var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json", > "[2018-10-02 08:30:56,125] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:56,125] (heat-config) [DEBUG] [2018-10-02 08:30:50,613] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_variables.json", > "[2018-10-02 08:30:56,121] (heat-config) [INFO] Return code 0", > "[2018-10-02 08:30:56,121] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 08:30:56,121] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3b3ce860-a913-48f2-b076-8ceabc7753a2_playbook.yaml", > "", > "[2018-10-02 08:30:56,125] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 08:30:56,125] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.json < /var/lib/heat-config/deployed/3b3ce860-a913-48f2-b076-8ceabc7753a2.notify.json", > "[2018-10-02 08:30:56,491] (heat-config) [INFO] ", > "[2018-10-02 08:30:56,491] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:56,627 p=1004 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 08:30:56,627 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.081) 0:02:09.360 ******* >2018-10-02 08:30:56,645 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:56,667 p=1004 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 08:30:56,667 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.040) 0:02:09.400 ******* >2018-10-02 08:30:56,727 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a8c6f74a-b1bf-4cb3-965e-5faf232da19c"}, "changed": false} >2018-10-02 08:30:56,748 p=1004 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 08:30:56,749 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.081) 0:02:09.482 ******* >2018-10-02 08:30:56,802 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 08:30:56,822 p=1004 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 08:30:56,822 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.073) 0:02:09.556 ******* >2018-10-02 08:30:56,839 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 08:30:56,860 p=1004 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 08:30:56,860 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.037) 0:02:09.593 ******* >2018-10-02 08:30:56,879 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:56,899 p=1004 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 08:30:56,899 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.039) 0:02:09.633 ******* >2018-10-02 08:30:56,921 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:56,944 p=1004 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy for check-mode] ******** >2018-10-02 08:30:56,944 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.044) 0:02:09.678 ******* >2018-10-02 08:30:56,962 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:56,981 p=1004 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 08:30:56,981 p=1004 u=mistral | Tuesday 02 October 2018 08:30:56 -0400 (0:00:00.036) 0:02:09.714 ******* >2018-10-02 08:30:56,999 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:57,021 p=1004 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 08:30:57,021 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.039) 0:02:09.754 ******* >2018-10-02 08:30:57,038 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:57,057 p=1004 u=mistral | TASK [diff hieradata changes for check mode] 
*********************************** >2018-10-02 08:30:57,058 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.036) 0:02:09.791 ******* >2018-10-02 08:30:57,078 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:57,099 p=1004 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 08:30:57,099 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.041) 0:02:09.832 ******* >2018-10-02 08:30:57,120 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:57,139 p=1004 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 08:30:57,140 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.040) 0:02:09.873 ******* >2018-10-02 08:30:57,157 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:57,176 p=1004 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 08:30:57,177 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.036) 0:02:09.910 ******* >2018-10-02 08:30:57,192 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:30:57,213 p=1004 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-10-02 08:30:57,213 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.036) 0:02:09.946 ******* >2018-10-02 08:30:57,764 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "2326da70e7a3e9f114dfccbba7e5553778050113", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-a8c6f74a-b1bf-4cb3-965e-5faf232da19c", "gid": 0, "group": "root", "md5sum": "3777832a750dd28643c0744319fcf0c1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483457.27-33474796693722/source", "state": "file", "uid": 0} >2018-10-02 08:30:57,786 p=1004 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-10-02 08:30:57,787 p=1004 u=mistral | Tuesday 02 October 2018 08:30:57 -0400 (0:00:00.573) 0:02:10.520 ******* >2018-10-02 08:30:57,989 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:30:58,013 p=1004 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-10-02 08:30:58,013 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.226) 0:02:10.746 ******* >2018-10-02 08:30:58,035 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:58,058 p=1004 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-10-02 08:30:58,058 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.045) 0:02:10.792 ******* >2018-10-02 08:30:58,079 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:58,101 p=1004 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-10-02 08:30:58,101 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.042) 0:02:10.835 ******* >2018-10-02 08:30:58,119 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:58,140 p=1004 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-10-02 08:30:58,140 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.039) 0:02:10.874 ******* >2018-10-02 08:30:58,845 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n 
exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.notify.json)", "delta": "0:00:00.498134", "end": "2018-10-02 08:30:58.824200", "rc": 0, "start": "2018-10-02 08:30:58.326066", "stderr": "[2018-10-02 08:30:58,354] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json\n[2018-10-02 08:30:58,391] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 08:30:58,391] (heat-config) [DEBUG] [2018-10-02 08:30:58,381] (heat-config) [INFO] artifact_urls=\n[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756\n[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ComputeArtifactsDeploy-wtkmuwsl3dgl-0-axx7al27jq7z/984ef511-e217-403a-85a8-7e46b04fc051\n[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 08:30:58,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c\n[2018-10-02 08:30:58,387] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-10-02 08:30:58,388] (heat-config) [DEBUG] \n[2018-10-02 08:30:58,388] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c\n\n[2018-10-02 08:30:58,391] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 08:30:58,392] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.notify.json\n[2018-10-02 08:30:58,817] (heat-config) [INFO] \n[2018-10-02 08:30:58,817] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 08:30:58,354] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json", "[2018-10-02 08:30:58,391] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 08:30:58,391] (heat-config) [DEBUG] [2018-10-02 08:30:58,381] (heat-config) [INFO] artifact_urls=", "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ComputeArtifactsDeploy-wtkmuwsl3dgl-0-axx7al27jq7z/984ef511-e217-403a-85a8-7e46b04fc051", "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 08:30:58,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c", "[2018-10-02 08:30:58,387] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-10-02 08:30:58,388] (heat-config) [DEBUG] ", "[2018-10-02 08:30:58,388] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c", "", "[2018-10-02 08:30:58,391] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 08:30:58,392] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.notify.json", "[2018-10-02 08:30:58,817] (heat-config) [INFO] ", "[2018-10-02 08:30:58,817] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 08:30:58,866 p=1004 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-10-02 08:30:58,866 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.725) 0:02:11.599 ******* >2018-10-02 08:30:58,916 p=1004 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 08:30:58,354] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json", > "[2018-10-02 08:30:58,391] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 08:30:58,391] (heat-config) [DEBUG] [2018-10-02 08:30:58,381] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_server_id=22e53bb9-293e-40e4-a8b0-aa94ddbd3756", > "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-neoosyt67g2y-ComputeArtifactsDeploy-wtkmuwsl3dgl-0-axx7al27jq7z/984ef511-e217-403a-85a8-7e46b04fc051", > "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 08:30:58,381] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 08:30:58,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c", > "[2018-10-02 08:30:58,387] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-10-02 08:30:58,388] (heat-config) [DEBUG] ", > "[2018-10-02 08:30:58,388] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a8c6f74a-b1bf-4cb3-965e-5faf232da19c", > "", > "[2018-10-02 08:30:58,391] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 08:30:58,392] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.json < /var/lib/heat-config/deployed/a8c6f74a-b1bf-4cb3-965e-5faf232da19c.notify.json", > "[2018-10-02 08:30:58,817] (heat-config) [INFO] ", > "[2018-10-02 08:30:58,817] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 08:30:58,937 p=1004 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 08:30:58,937 p=1004 u=mistral | Tuesday 02 October 2018 08:30:58 -0400 (0:00:00.071) 0:02:11.671 ******* >2018-10-02 08:30:58,951 p=1004 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:58,958 p=1004 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-10-02 08:30:59,001 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:30:59,002 p=1004 u=mistral | Tuesday 02 October 2018 08:30:59 -0400 (0:00:00.064) 0:02:11.735 ******* >2018-10-02 08:30:59,059 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,060 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,087 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,088 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,220 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/aodh) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:30:59,393 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", 
"size": 6, "state": "directory", "uid": 0} >2018-10-02 08:30:59,420 p=1004 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-10-02 08:30:59,421 p=1004 u=mistral | Tuesday 02 October 2018 08:30:59 -0400 (0:00:00.418) 0:02:12.154 ******* >2018-10-02 08:30:59,481 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,495 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:30:59,906 p=1004 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-10-02 08:30:59,906 p=1004 u=mistral | ...ignoring >2018-10-02 08:30:59,934 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:30:59,934 p=1004 u=mistral | Tuesday 02 October 2018 08:30:59 -0400 (0:00:00.513) 0:02:12.667 ******* >2018-10-02 08:30:59,990 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:00,005 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:00,137 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:00,163 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:00,163 p=1004 u=mistral | Tuesday 02 October 2018 08:31:00 -0400 (0:00:00.229) 0:02:12.896 ******* >2018-10-02 08:31:00,219 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
08:31:00,231 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:00,448 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:00,475 p=1004 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-10-02 08:31:00,476 p=1004 u=mistral | Tuesday 02 October 2018 08:31:00 -0400 (0:00:00.312) 0:02:13.209 ******* >2018-10-02 08:31:00,533 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:00,548 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,030 p=1004 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-10-02 08:31:01,031 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:01,058 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:01,058 p=1004 u=mistral | Tuesday 02 October 2018 08:31:01 -0400 (0:00:00.582) 0:02:13.792 ******* >2018-10-02 08:31:01,125 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,126 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,231 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, 
"item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,232 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,336 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:01,508 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:01,538 p=1004 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-10-02 08:31:01,538 p=1004 u=mistral | Tuesday 02 October 2018 08:31:01 -0400 (0:00:00.479) 0:02:14.271 ******* >2018-10-02 08:31:01,600 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:01,614 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,007 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-10-02 08:31:02,007 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:02,033 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:02,033 p=1004 u=mistral | Tuesday 02 October 2018 08:31:02 -0400 (0:00:00.495) 0:02:14.767 ******* >2018-10-02 08:31:02,091 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,092 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,108 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,114 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,243 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:02,383 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:02,412 p=1004 u=mistral | TASK [ensure ceph configurations exist] 
**************************************** >2018-10-02 08:31:02,413 p=1004 u=mistral | Tuesday 02 October 2018 08:31:02 -0400 (0:00:00.379) 0:02:15.146 ******* >2018-10-02 08:31:02,472 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,485 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,606 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:02,634 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:02,634 p=1004 u=mistral | Tuesday 02 October 2018 08:31:02 -0400 (0:00:00.221) 0:02:15.367 ******* >2018-10-02 08:31:02,692 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,711 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,845 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:02,871 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:02,871 p=1004 u=mistral | Tuesday 02 October 2018 08:31:02 -0400 (0:00:00.237) 0:02:15.604 ******* >2018-10-02 08:31:02,929 p=1004 u=mistral | skipping: 
[ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,930 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,960 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:02,961 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,084 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:03,249 p=1004 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:03,275 p=1004 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-10-02 08:31:03,276 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.404) 0:02:16.009 ******* >2018-10-02 08:31:03,332 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,333 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-10-02 08:31:03,346 p=1004 u=mistral | skipping: [compute-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,383 p=1004 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-10-02 08:31:03,384 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.108) 0:02:16.117 ******* >2018-10-02 08:31:03,418 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,445 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,458 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,484 p=1004 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-10-02 08:31:03,484 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.100) 0:02:16.218 ******* >2018-10-02 08:31:03,515 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,543 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,561 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,585 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:31:03,585 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.100) 0:02:16.319 ******* >2018-10-02 08:31:03,639 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,640 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", 
"container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 08:31:03,650 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,672 p=1004 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 08:31:03,672 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.086) 0:02:16.406 ******* >2018-10-02 08:31:03,721 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,731 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:03,808 p=1004 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 08:31:03,808 p=1004 u=mistral | Tuesday 02 October 2018 08:31:03 -0400 (0:00:00.135) 0:02:16.541 ******* >2018-10-02 08:31:04,128 p=1004 u=mistral | changed: [controller-0] => {"changed": true} >2018-10-02 08:31:04,155 p=1004 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 08:31:04,156 p=1004 u=mistral | Tuesday 02 October 2018 08:31:04 -0400 (0:00:00.347) 0:02:16.889 ******* >2018-10-02 08:31:04,665 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 08:31:04,692 p=1004 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-10-02 08:31:04,693 p=1004 u=mistral | Tuesday 02 October 2018 08:31:04 -0400 (0:00:00.536) 0:02:17.426 ******* >2018-10-02 08:31:04,899 p=1004 
u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:04,925 p=1004 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 08:31:04,925 p=1004 u=mistral | Tuesday 02 October 2018 08:31:04 -0400 (0:00:00.232) 0:02:17.658 ******* >2018-10-02 08:31:05,264 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 08:31:05,288 p=1004 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 08:31:05,288 p=1004 u=mistral | Tuesday 02 October 2018 08:31:05 -0400 (0:00:00.362) 0:02:18.021 ******* >2018-10-02 08:31:05,510 p=1004 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:05,535 p=1004 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 08:31:05,536 p=1004 u=mistral | Tuesday 02 October 2018 08:31:05 -0400 (0:00:00.247) 0:02:18.269 ******* >2018-10-02 08:31:05,759 p=1004 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 08:31:05,782 p=1004 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 08:31:05,783 p=1004 u=mistral | Tuesday 02 October 2018 08:31:05 -0400 (0:00:00.246) 0:02:18.516 ******* >2018-10-02 08:31:05,991 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": 
"root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:06,030 p=1004 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 08:31:06,030 p=1004 u=mistral | Tuesday 02 October 2018 08:31:06 -0400 (0:00:00.247) 0:02:18.764 ******* >2018-10-02 08:31:06,571 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483466.08-103451774479538/source", "state": "file", "uid": 0} >2018-10-02 08:31:06,597 p=1004 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 08:31:06,597 p=1004 u=mistral | Tuesday 02 October 2018 08:31:06 -0400 (0:00:00.566) 0:02:19.330 ******* >2018-10-02 08:31:06,812 p=1004 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:06,835 p=1004 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 08:31:06,836 p=1004 u=mistral | Tuesday 02 October 2018 08:31:06 -0400 (0:00:00.238) 0:02:19.569 ******* >2018-10-02 08:31:07,049 p=1004 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:07,073 p=1004 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 08:31:07,073 p=1004 u=mistral | Tuesday 02 October 2018 08:31:07 -0400 (0:00:00.237) 0:02:19.807 ******* >2018-10-02 08:31:07,457 p=1004 u=mistral | changed: 
[controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-10-02 08:31:07,482 p=1004 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-10-02 08:31:07,483 p=1004 u=mistral | Tuesday 02 October 2018 08:31:07 -0400 (0:00:00.409) 0:02:20.216 ******* >2018-10-02 08:31:07,503 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:07,505 p=1004 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 08:31:07,505 p=1004 u=mistral | Tuesday 02 October 2018 08:31:07 -0400 (0:00:00.022) 0:02:20.238 ******* >2018-10-02 08:31:07,752 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002239", "end": "2018-10-02 08:31:07.698956", "rc": 0, "start": "2018-10-02 08:31:07.696717", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 08:31:07,753 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 08:31:07,753 p=1004 u=mistral | Tuesday 02 October 2018 08:31:07 -0400 (0:00:00.248) 0:02:20.487 ******* >2018-10-02 08:31:08,219 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 08:31:08,219 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 08:31:08,220 p=1004 u=mistral | Tuesday 02 October 2018 08:31:08 -0400 (0:00:00.466) 0:02:20.953 ******* >2018-10-02 08:31:09,787 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket system.slice systemd-journald.socket docker-storage-setup.service registries.service network.target 
basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; 
pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket docker-cleanup.timer basic.target registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": 
"journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 08:31:09,789 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 08:31:09,789 p=1004 u=mistral | Tuesday 02 October 2018 08:31:09 -0400 (0:00:01.569) 0:02:22.522 ******* >2018-10-02 08:31:09,857 p=1004 u=mistral | Pausing for 10 seconds >2018-10-02 08:31:09,857 p=1004 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 08:31:09,857 p=1004 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 08:31:19,860 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 08:31:09.856861", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 08:31:19.856997", "user_input": ""} >2018-10-02 08:31:19,860 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 08:31:19,861 p=1004 u=mistral | Tuesday 02 October 2018 08:31:19 -0400 (0:00:10.071) 0:02:32.594 ******* >2018-10-02 08:31:20,136 p=1004 u=mistral | changed: [controller-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", 
"images"], "delta": "0:00:00.038981", "end": "2018-10-02 08:31:20.108085", "rc": 0, "start": "2018-10-02 08:31:20.069104", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 08:31:20,161 p=1004 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 08:31:20,161 p=1004 u=mistral | Tuesday 02 October 2018 08:31:20 -0400 (0:00:00.300) 0:02:32.895 ******* >2018-10-02 08:31:20,459 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 08:31:09 EDT", "ActiveEnterTimestampMonotonic": "340565274", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "rhel-push-plugin.socket system.slice systemd-journald.socket docker-storage-setup.service registries.service network.target basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:31:08 EDT", "AssertTimestampMonotonic": "339379704", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:31:08 EDT", "ConditionTimestampMonotonic": "339379704", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": 
"/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "15151", "ExecMainStartTimestamp": "Tue 2018-10-02 08:31:08 EDT", "ExecMainStartTimestampMonotonic": "339381395", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 08:31:08 EDT] ; stop_time=[n/a] ; pid=15151 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:31:08 EDT", "InactiveExitTimestampMonotonic": "339381434", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", 
"LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "15151", "MemoryAccounting": "no", "MemoryCurrent": "69820416", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket docker-cleanup.timer basic.target registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "26", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": 
"docker-storage-setup.service system.slice", "WatchdogTimestamp": "Tue 2018-10-02 08:31:09 EDT", "WatchdogTimestampMonotonic": "340565094", "WatchdogUSec": "0"}} >2018-10-02 08:31:20,486 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:20,487 p=1004 u=mistral | Tuesday 02 October 2018 08:31:20 -0400 (0:00:00.325) 0:02:33.220 ******* >2018-10-02 08:31:20,547 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:20,565 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:20,705 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/glance) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:20,732 p=1004 u=mistral | TASK [glance logs readme] ****************************************************** >2018-10-02 08:31:20,732 p=1004 u=mistral | Tuesday 02 October 2018 08:31:20 -0400 (0:00:00.245) 0:02:33.465 ******* >2018-10-02 08:31:20,789 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:20,802 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,207 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-10-02 08:31:21,207 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:21,234 p=1004 u=mistral | TASK [Set glance remote_file_path fact] **************************************** >2018-10-02 08:31:21,235 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.502) 0:02:33.968 ******* >2018-10-02 08:31:21,266 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,293 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,306 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,332 p=1004 u=mistral | TASK [Create glance remote_file_path] ****************************************** >2018-10-02 08:31:21,332 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.097) 0:02:34.065 ******* >2018-10-02 08:31:21,365 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,392 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,403 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,430 p=1004 u=mistral | TASK [stat] ******************************************************************** >2018-10-02 08:31:21,430 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.098) 0:02:34.163 ******* >2018-10-02 08:31:21,460 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,487 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-10-02 08:31:21,498 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,524 p=1004 u=mistral | TASK [copy] ******************************************************************** >2018-10-02 08:31:21,524 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.094) 0:02:34.257 ******* >2018-10-02 08:31:21,556 p=1004 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,583 p=1004 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,601 p=1004 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,628 p=1004 u=mistral | TASK [Mount glance Netapp share] *********************************************** >2018-10-02 08:31:21,628 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.104) 0:02:34.361 ******* >2018-10-02 08:31:21,659 p=1004 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,689 p=1004 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,708 p=1004 u=mistral | skipping: 
[compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,734 p=1004 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-10-02 08:31:21,735 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.106) 0:02:34.468 ******* >2018-10-02 08:31:21,766 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,793 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,805 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,832 p=1004 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-10-02 08:31:21,832 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.097) 0:02:34.566 ******* >2018-10-02 08:31:21,862 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,886 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,899 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,923 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:21,923 p=1004 u=mistral | Tuesday 02 October 2018 08:31:21 -0400 (0:00:00.090) 0:02:34.656 ******* >2018-10-02 08:31:21,977 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": 
"/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:21,978 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:22,000 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:22,007 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:22,131 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:22,293 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:22,318 p=1004 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-10-02 08:31:22,319 p=1004 u=mistral | Tuesday 02 October 2018 08:31:22 -0400 (0:00:00.395) 0:02:35.052 ******* >2018-10-02 08:31:22,373 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:22,387 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-10-02 08:31:22,786 p=1004 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-10-02 08:31:22,787 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:22,813 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:22,813 p=1004 u=mistral | Tuesday 02 October 2018 08:31:22 -0400 (0:00:00.494) 0:02:35.546 ******* >2018-10-02 08:31:22,865 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:22,880 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,004 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:23,030 p=1004 u=mistral | TASK [get parameters] ********************************************************** >2018-10-02 08:31:23,030 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.217) 0:02:35.763 ******* >2018-10-02 08:31:23,090 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:23,091 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:23,103 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:23,127 p=1004 u=mistral | TASK [get DeployedSSLCertificatePath attributes] 
******************************* >2018-10-02 08:31:23,127 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.097) 0:02:35.861 ******* >2018-10-02 08:31:23,159 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,185 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,198 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,223 p=1004 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-10-02 08:31:23,223 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.095) 0:02:35.956 ******* >2018-10-02 08:31:23,255 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,282 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,294 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,319 p=1004 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-10-02 08:31:23,320 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.096) 0:02:36.053 ******* >2018-10-02 08:31:23,350 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,380 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,392 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,417 p=1004 u=mistral | TASK [get haproxy status] ****************************************************** >2018-10-02 08:31:23,417 
p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.097) 0:02:36.150 ******* >2018-10-02 08:31:23,447 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,473 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,485 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,509 p=1004 u=mistral | TASK [get pacemaker status] **************************************************** >2018-10-02 08:31:23,509 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.092) 0:02:36.242 ******* >2018-10-02 08:31:23,539 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,564 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,576 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,600 p=1004 u=mistral | TASK [get docker status] ******************************************************* >2018-10-02 08:31:23,601 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.091) 0:02:36.334 ******* >2018-10-02 08:31:23,630 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,654 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,672 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,699 p=1004 u=mistral | TASK [get container_id] ******************************************************** >2018-10-02 08:31:23,699 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 
(0:00:00.098) 0:02:36.432 ******* >2018-10-02 08:31:23,729 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,755 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,767 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,793 p=1004 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-10-02 08:31:23,794 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.094) 0:02:36.527 ******* >2018-10-02 08:31:23,825 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,852 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,865 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,891 p=1004 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-10-02 08:31:23,891 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.097) 0:02:36.624 ******* >2018-10-02 08:31:23,921 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,948 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,961 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:23,987 p=1004 u=mistral | TASK [push certificate content] ************************************************ >2018-10-02 08:31:23,987 p=1004 u=mistral | Tuesday 02 October 2018 08:31:23 -0400 (0:00:00.096) 0:02:36.720 ******* >2018-10-02 08:31:24,019 
p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:24,046 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:24,060 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:31:24,085 p=1004 u=mistral | TASK [set certificate ownership] *********************************************** >2018-10-02 08:31:24,085 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.098) 0:02:36.819 ******* >2018-10-02 08:31:24,117 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,144 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,157 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,184 p=1004 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-10-02 08:31:24,184 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.098) 0:02:36.918 ******* >2018-10-02 08:31:24,217 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,243 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,256 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,283 p=1004 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-10-02 
08:31:24,283 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.098) 0:02:37.016 ******* >2018-10-02 08:31:24,347 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,348 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,360 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,429 p=1004 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-10-02 08:31:24,429 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.146) 0:02:37.163 ******* >2018-10-02 08:31:24,462 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,503 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,532 p=1004 u=mistral | TASK [assert {{ kolla_dir }}{{ cert_path }} exists] **************************** >2018-10-02 08:31:24,532 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.102) 0:02:37.265 ******* >2018-10-02 08:31:24,563 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,590 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,603 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,630 p=1004 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-10-02 08:31:24,630 p=1004 u=mistral | Tuesday 02 October 2018 
08:31:24 -0400 (0:00:00.097) 0:02:37.363 ******* >2018-10-02 08:31:24,660 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,687 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,700 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,726 p=1004 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-10-02 08:31:24,726 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.096) 0:02:37.460 ******* >2018-10-02 08:31:24,759 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,784 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,797 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,820 p=1004 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-10-02 08:31:24,820 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.093) 0:02:37.553 ******* >2018-10-02 08:31:24,848 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,873 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,887 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,910 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:24,910 p=1004 u=mistral | Tuesday 02 October 2018 08:31:24 -0400 (0:00:00.089) 0:02:37.643 ******* 
>2018-10-02 08:31:24,962 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:24,978 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,134 p=1004 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-10-02 08:31:25,162 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:25,162 p=1004 u=mistral | Tuesday 02 October 2018 08:31:25 -0400 (0:00:00.252) 0:02:37.896 ******* >2018-10-02 08:31:25,226 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,227 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,243 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,250 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,378 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/heat) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/heat", 
"mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:25,555 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:25,583 p=1004 u=mistral | TASK [heat logs readme] ******************************************************** >2018-10-02 08:31:25,583 p=1004 u=mistral | Tuesday 02 October 2018 08:31:25 -0400 (0:00:00.420) 0:02:38.317 ******* >2018-10-02 08:31:25,641 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:25,656 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,055 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-10-02 08:31:26,056 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:26,082 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:26,082 p=1004 u=mistral | Tuesday 02 October 2018 08:31:26 -0400 (0:00:00.498) 0:02:38.815 ******* >2018-10-02 08:31:26,142 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,145 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,160 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,166 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,298 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:26,462 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": 
"unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:26,488 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:26,488 p=1004 u=mistral | Tuesday 02 October 2018 08:31:26 -0400 (0:00:00.405) 0:02:39.221 ******* >2018-10-02 08:31:26,543 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,558 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,693 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:26,730 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:26,730 p=1004 u=mistral | Tuesday 02 October 2018 08:31:26 -0400 (0:00:00.242) 0:02:39.464 ******* >2018-10-02 08:31:26,795 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,797 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,818 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,824 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:26,957 
p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:27,112 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:27,141 p=1004 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-10-02 08:31:27,142 p=1004 u=mistral | Tuesday 02 October 2018 08:31:27 -0400 (0:00:00.411) 0:02:39.875 ******* >2018-10-02 08:31:27,211 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:27,224 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:27,641 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-10-02 08:31:27,641 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:27,667 p=1004 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 08:31:27,667 p=1004 u=mistral | Tuesday 02 October 2018 08:31:27 -0400 (0:00:00.525) 0:02:40.401 ******* >2018-10-02 08:31:27,722 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:27,735 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:27,912 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538483468.1615233, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537979153.5750983, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2886261, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1807870409", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 08:31:27,939 p=1004 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 08:31:27,939 p=1004 u=mistral | Tuesday 02 October 2018 08:31:27 -0400 (0:00:00.271) 0:02:40.672 ******* >2018-10-02 08:31:28,002 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:31:28,017 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,237 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ActiveEnterTimestampMonotonic": "3525938", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:25:32 EDT", "AssertTimestampMonotonic": "3525113", "Backlog": "128", "Before": "sockets.target shutdown.target iscsid.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ConditionTimestampMonotonic": "3525113", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:25:32 EDT", "InactiveExitTimestampMonotonic": "3525938", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", 
"KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": 
"no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-10-02 08:31:28,263 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:28,264 p=1004 u=mistral | Tuesday 02 October 2018 08:31:28 -0400 (0:00:00.324) 0:02:40.997 ******* >2018-10-02 08:31:28,321 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,322 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,339 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,345 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,471 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/keystone) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:28,640 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": true, "gid": 0, "group": "root", "item": 
"/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:28,665 p=1004 u=mistral | TASK [keystone logs readme] **************************************************** >2018-10-02 08:31:28,665 p=1004 u=mistral | Tuesday 02 October 2018 08:31:28 -0400 (0:00:00.401) 0:02:41.399 ******* >2018-10-02 08:31:28,718 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:28,734 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,148 p=1004 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-10-02 08:31:29,148 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:29,174 p=1004 u=mistral | TASK [memcached logs readme] *************************************************** >2018-10-02 08:31:29,174 p=1004 u=mistral | Tuesday 02 October 2018 08:31:29 -0400 (0:00:00.508) 0:02:41.907 ******* >2018-10-02 08:31:29,232 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,247 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,717 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3b6f3952a077d2e5003df30c8c439478917cb6c4", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "md5sum": "ffdb1524e5789470856ae32ded4e2f80", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 48, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483489.22-150581140366453/source", "state": "file", "uid": 0} >2018-10-02 08:31:29,743 
p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:29,744 p=1004 u=mistral | Tuesday 02 October 2018 08:31:29 -0400 (0:00:00.569) 0:02:42.477 ******* >2018-10-02 08:31:29,806 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,807 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,823 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,836 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:29,965 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/mysql) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:30,142 p=1004 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": "/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-10-02 08:31:30,170 p=1004 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-10-02 08:31:30,171 p=1004 u=mistral | Tuesday 02 October 2018 08:31:30 -0400 (0:00:00.427) 0:02:42.904 ******* >2018-10-02 08:31:30,233 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,248 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,709 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "md5sum": "1f3e80eed7060dfe5ee49c8063244c53", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483490.21-86103643492637/source", "state": "file", "uid": 0} >2018-10-02 08:31:30,735 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:30,735 p=1004 u=mistral | Tuesday 02 October 2018 08:31:30 -0400 (0:00:00.564) 0:02:43.468 ******* >2018-10-02 08:31:30,793 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,794 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,811 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,819 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:30,941 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": 
"root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:31,108 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:31,138 p=1004 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-10-02 08:31:31,138 p=1004 u=mistral | Tuesday 02 October 2018 08:31:31 -0400 (0:00:00.403) 0:02:43.872 ******* >2018-10-02 08:31:31,195 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:31,209 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:31,603 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-10-02 08:31:31,603 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:31,630 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:31,630 p=1004 u=mistral | Tuesday 02 October 2018 08:31:31 -0400 (0:00:00.491) 0:02:44.364 ******* >2018-10-02 08:31:31,687 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:31,705 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:31,842 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:31,870 p=1004 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-10-02 08:31:31,870 p=1004 u=mistral | Tuesday 02 October 2018 08:31:31 -0400 (0:00:00.239) 0:02:44.603 ******* >2018-10-02 08:31:31,927 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:31,941 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:32,072 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} 
>2018-10-02 08:31:32,097 p=1004 u=mistral | TASK [Copy in cleanup script] ************************************************** >2018-10-02 08:31:32,097 p=1004 u=mistral | Tuesday 02 October 2018 08:31:32 -0400 (0:00:00.227) 0:02:44.830 ******* >2018-10-02 08:31:32,152 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:32,165 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:32,617 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483492.14-133149799525624/source", "state": "file", "uid": 0} >2018-10-02 08:31:32,644 p=1004 u=mistral | TASK [Copy in cleanup service] ************************************************* >2018-10-02 08:31:32,645 p=1004 u=mistral | Tuesday 02 October 2018 08:31:32 -0400 (0:00:00.547) 0:02:45.378 ******* >2018-10-02 08:31:32,704 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:32,715 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,173 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483492.69-112921413159355/source", "state": "file", "uid": 0} >2018-10-02 
08:31:33,199 p=1004 u=mistral | TASK [Enabling the cleanup service] ******************************************** >2018-10-02 08:31:33,199 p=1004 u=mistral | Tuesday 02 October 2018 08:31:33 -0400 (0:00:00.554) 0:02:45.933 ******* >2018-10-02 08:31:33,260 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,273 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,530 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target system.slice systemd-journald.socket network.target openvswitch.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target docker.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": 
"/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", 
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 08:31:33,557 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:33,557 p=1004 u=mistral | Tuesday 02 October 2018 08:31:33 -0400 (0:00:00.358) 0:02:46.291 ******* >2018-10-02 08:31:33,618 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,619 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,643 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,644 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:33,820 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", 
"size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:33,981 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:34,008 p=1004 u=mistral | TASK [nova logs readme] ******************************************************** >2018-10-02 08:31:34,008 p=1004 u=mistral | Tuesday 02 October 2018 08:31:34 -0400 (0:00:00.450) 0:02:46.741 ******* >2018-10-02 08:31:34,118 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,132 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,533 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-10-02 08:31:34,534 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:34,561 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:34,561 p=1004 u=mistral | Tuesday 02 October 2018 08:31:34 -0400 (0:00:00.553) 0:02:47.295 ******* >2018-10-02 08:31:34,623 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,636 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,754 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:34,781 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:34,781 p=1004 u=mistral | Tuesday 02 October 2018 08:31:34 -0400 (0:00:00.219) 0:02:47.515 ******* >2018-10-02 08:31:34,839 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,840 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,860 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,870 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": 
false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:34,992 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:35,144 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:35,174 p=1004 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 08:31:35,175 p=1004 u=mistral | Tuesday 02 October 2018 08:31:35 -0400 (0:00:00.393) 0:02:47.908 ******* >2018-10-02 08:31:35,232 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,234 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 08:31:35,245 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,270 p=1004 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 08:31:35,271 p=1004 u=mistral | Tuesday 02 October 2018 08:31:35 -0400 (0:00:00.096) 0:02:48.004 ******* >2018-10-02 08:31:35,301 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,328 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,340 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,366 p=1004 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 08:31:35,366 p=1004 u=mistral | Tuesday 02 October 2018 08:31:35 -0400 (0:00:00.095) 0:02:48.099 ******* >2018-10-02 08:31:35,423 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:35,436 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,434 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.864897", "end": "2018-10-02 08:31:42.413757", "rc": 0, "start": "2018-10-02 08:31:35.548860", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 08:31:42 ntpdate[16554]: adjust time server 10.11.160.238 offset -0.000870 sec", "stdout_lines": [" 2 Oct 08:31:42 ntpdate[16554]: adjust time server 10.11.160.238 offset -0.000870 sec"]} >2018-10-02 08:31:42,460 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:42,460 p=1004 u=mistral | Tuesday 02 October 2018 08:31:42 -0400 (0:00:07.094) 0:02:55.193 ******* >2018-10-02 08:31:42,521 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,522 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,538 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": 
"/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,545 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,657 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/panko) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:42,819 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:42,849 p=1004 u=mistral | TASK [panko logs readme] ******************************************************* >2018-10-02 08:31:42,849 p=1004 u=mistral | Tuesday 02 October 2018 08:31:42 -0400 (0:00:00.388) 0:02:55.582 ******* >2018-10-02 08:31:42,906 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:42,920 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,347 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-10-02 08:31:43,347 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:43,373 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:43,373 p=1004 u=mistral | Tuesday 02 October 2018 08:31:43 -0400 (0:00:00.524) 0:02:56.106 ******* >2018-10-02 08:31:43,426 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,430 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,449 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,457 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,587 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:43,752 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:43,781 p=1004 u=mistral | TASK [rabbitmq logs 
readme] **************************************************** >2018-10-02 08:31:43,781 p=1004 u=mistral | Tuesday 02 October 2018 08:31:43 -0400 (0:00:00.408) 0:02:56.515 ******* >2018-10-02 08:31:43,841 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:43,857 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,276 p=1004 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-10-02 08:31:44,276 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:44,303 p=1004 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-10-02 08:31:44,303 p=1004 u=mistral | Tuesday 02 October 2018 08:31:44 -0400 (0:00:00.521) 0:02:57.037 ******* >2018-10-02 08:31:44,361 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,376 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,550 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.043018", "end": "2018-10-02 08:31:44.529359", "rc": 0, "start": "2018-10-02 08:31:44.486341", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or 
directory"], "stdout": "", "stdout_lines": []} >2018-10-02 08:31:44,579 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:44,579 p=1004 u=mistral | Tuesday 02 October 2018 08:31:44 -0400 (0:00:00.275) 0:02:57.312 ******* >2018-10-02 08:31:44,638 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,639 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,640 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,657 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,663 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,669 p=1004 u=mistral | skipping: [compute-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:44,787 p=1004 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-10-02 08:31:44,949 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/containers/redis) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": 
"/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:45,111 p=1004 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-10-02 08:31:45,139 p=1004 u=mistral | TASK [redis logs readme] ******************************************************* >2018-10-02 08:31:45,140 p=1004 u=mistral | Tuesday 02 October 2018 08:31:45 -0400 (0:00:00.560) 0:02:57.873 ******* >2018-10-02 08:31:45,196 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:45,210 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:45,637 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "md5sum": "26fc3dbfb40d3414a608e987cc577748", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483505.18-145993368881280/source", "state": "file", "uid": 0} >2018-10-02 08:31:45,661 p=1004 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-10-02 08:31:45,661 p=1004 u=mistral | Tuesday 02 October 2018 08:31:45 -0400 (0:00:00.521) 0:02:58.395 ******* >2018-10-02 08:31:45,716 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:45,729 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:45,852 p=1004 u=mistral | changed: 
[controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:45,880 p=1004 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-10-02 08:31:45,880 p=1004 u=mistral | Tuesday 02 October 2018 08:31:45 -0400 (0:00:00.219) 0:02:58.614 ******* >2018-10-02 08:31:45,939 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:45,952 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,070 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:46,093 p=1004 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-10-02 08:31:46,093 p=1004 u=mistral | Tuesday 02 October 2018 08:31:46 -0400 (0:00:00.213) 0:02:58.827 ******* >2018-10-02 08:31:46,142 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,155 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,534 p=1004 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-10-02 08:31:46,535 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:46,558 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:46,558 p=1004 u=mistral | Tuesday 02 October 2018 08:31:46 -0400 (0:00:00.464) 0:02:59.292 ******* >2018-10-02 08:31:46,614 p=1004 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,616 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,631 p=1004 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,637 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:46,762 p=1004 u=mistral | changed: [controller-0] => (item=/srv/node) => {"changed": true, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:46,930 p=1004 u=mistral | changed: [controller-0] => (item=/var/log/swift) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:46,964 p=1004 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-10-02 08:31:46,964 p=1004 u=mistral | Tuesday 02 October 2018 08:31:46 -0400 (0:00:00.405) 
0:02:59.697 ******* >2018-10-02 08:31:47,042 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:47,056 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:47,186 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-10-02 08:31:47,213 p=1004 u=mistral | TASK [Check if rsyslog exists] ************************************************* >2018-10-02 08:31:47,213 p=1004 u=mistral | Tuesday 02 October 2018 08:31:47 -0400 (0:00:00.248) 0:02:59.946 ******* >2018-10-02 08:31:47,276 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:47,291 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:47,483 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538483135.4254978, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1537979117.8760984, "dev": 64514, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 588, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0755", "mtime": 1537975062.799, "nlink": 2, "path": "/etc/rsyslog.d", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 50, "uid": 0, "version": "18446744072318778956", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true}} >2018-10-02 08:31:47,509 p=1004 u=mistral | TASK 
[Forward logging to swift.log file] *************************************** >2018-10-02 08:31:47,509 p=1004 u=mistral | Tuesday 02 October 2018 08:31:47 -0400 (0:00:00.296) 0:03:00.243 ******* >2018-10-02 08:31:47,564 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:47,576 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,036 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "828097d22e649626706b267b5a61f05e49999586", "dest": "/etc/rsyslog.d/openstack-swift.conf", "gid": 0, "group": "root", "md5sum": "2118142de3156b2432c5c12816a4967c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:syslog_conf_t:s0", "size": 138, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483507.6-240978731276316/source", "state": "file", "uid": 0} >2018-10-02 08:31:48,063 p=1004 u=mistral | TASK [Restart rsyslogd service after logging conf change] ********************** >2018-10-02 08:31:48,064 p=1004 u=mistral | Tuesday 02 October 2018 08:31:48 -0400 (0:00:00.554) 0:03:00.797 ******* >2018-10-02 08:31:48,085 p=1004 u=mistral | [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using >`result|changed` instead use `result is changed`. This feature will be removed >in version 2.9. Deprecation warnings can be disabled by setting >deprecation_warnings=False in ansible.cfg. 
>2018-10-02 08:31:48,122 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,136 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,385 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "name": "rsyslog", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 08:25:35 EDT", "ActiveEnterTimestampMonotonic": "6266824", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice basic.target network.target network-online.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:25:35 EDT", "AssertTimestampMonotonic": "6205253", "Before": "pacemaker.service multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:25:35 EDT", "ConditionTimestampMonotonic": "6205253", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/rsyslog.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "System Logging Service", "DevicePolicy": "auto", "Documentation": "man:rsyslogd(8) http://www.rsyslog.com/doc/", "EnvironmentFile": "/etc/sysconfig/rsyslog (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1726", "ExecMainStartTimestamp": "Tue 2018-10-02 08:25:35 EDT", "ExecMainStartTimestampMonotonic": "6207451", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/rsyslogd ; argv[]=/usr/sbin/rsyslogd -n 
$SYSLOGD_OPTIONS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/rsyslog.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "rsyslog.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:25:35 EDT", "InactiveExitTimestampMonotonic": "6207494", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1726", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "rsyslog.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0066", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice network.target network-online.target", "WatchdogTimestamp": "Tue 2018-10-02 08:25:35 EDT", "WatchdogTimestampMonotonic": "6266786", "WatchdogUSec": "0"}} >2018-10-02 08:31:48,450 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:31:48,451 p=1004 u=mistral | Tuesday 02 October 2018 08:31:48 -0400 (0:00:00.386) 0:03:01.184 ******* >2018-10-02 08:31:48,510 p=1004 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,511 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,514 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,528 p=1004 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,534 p=1004 u=mistral | skipping: [compute-0] => 
(item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,540 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:48,650 p=1004 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:48,806 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:48,969 p=1004 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 244, "state": "directory", "uid": 0} >2018-10-02 08:31:48,999 p=1004 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-10-02 08:31:48,999 p=1004 u=mistral | Tuesday 02 October 2018 08:31:48 -0400 (0:00:00.548) 0:03:01.732 ******* >2018-10-02 08:31:49,058 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,060 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-10-02 08:31:49,074 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,101 p=1004 u=mistral | TASK [Create Swift d1 directory if needed] 
************************************* >2018-10-02 08:31:49,101 p=1004 u=mistral | Tuesday 02 October 2018 08:31:49 -0400 (0:00:00.101) 0:03:01.834 ******* >2018-10-02 08:31:49,160 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,174 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,309 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:49,343 p=1004 u=mistral | TASK [swift logs readme] ******************************************************* >2018-10-02 08:31:49,343 p=1004 u=mistral | Tuesday 02 October 2018 08:31:49 -0400 (0:00:00.241) 0:03:02.076 ******* >2018-10-02 08:31:49,410 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,424 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,842 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "md5sum": "23163287d564762945ee1738f049dc10", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483509.39-278826695362393/source", "state": "file", "uid": 0} >2018-10-02 08:31:49,868 p=1004 u=mistral | TASK [Set fact for SwiftRawDisks] ********************************************** >2018-10-02 08:31:49,868 p=1004 u=mistral | Tuesday 02 October 2018 08:31:49 -0400 (0:00:00.525) 0:03:02.602 ******* >2018-10-02 08:31:49,923 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,924 p=1004 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_raw_disks": {}}, "changed": false} >2018-10-02 08:31:49,935 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:49,960 p=1004 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-10-02 08:31:49,960 p=1004 u=mistral | Tuesday 02 October 2018 08:31:49 -0400 (0:00:00.091) 0:03:02.693 ******* >2018-10-02 08:31:50,019 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,034 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,062 p=1004 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-10-02 08:31:50,062 p=1004 u=mistral | Tuesday 02 October 2018 08:31:50 -0400 (0:00:00.101) 0:03:02.795 ******* >2018-10-02 08:31:50,122 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,138 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,164 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:50,164 p=1004 u=mistral | Tuesday 02 October 2018 08:31:50 -0400 (0:00:00.102) 0:03:02.898 ******* >2018-10-02 08:31:50,203 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,236 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,444 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": 
"/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:50,469 p=1004 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-10-02 08:31:50,469 p=1004 u=mistral | Tuesday 02 October 2018 08:31:50 -0400 (0:00:00.304) 0:03:03.202 ******* >2018-10-02 08:31:50,501 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,526 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:50,996 p=1004 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-10-02 08:31:50,996 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:51,021 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:31:51,021 p=1004 u=mistral | Tuesday 02 October 2018 08:31:51 -0400 (0:00:00.551) 0:03:03.754 ******* >2018-10-02 08:31:51,052 p=1004 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:51,078 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 08:31:51,288 p=1004 u=mistral | changed: [compute-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:51,311 p=1004 u=mistral | TASK [neutron logs 
readme] ***************************************************** >2018-10-02 08:31:51,312 p=1004 u=mistral | Tuesday 02 October 2018 08:31:51 -0400 (0:00:00.290) 0:03:04.045 ******* >2018-10-02 08:31:51,341 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:51,364 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:51,832 p=1004 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-10-02 08:31:51,832 p=1004 u=mistral | ...ignoring >2018-10-02 08:31:51,858 p=1004 u=mistral | TASK [Copy in cleanup script] ************************************************** >2018-10-02 08:31:51,858 p=1004 u=mistral | Tuesday 02 October 2018 08:31:51 -0400 (0:00:00.546) 0:03:04.592 ******* >2018-10-02 08:31:51,890 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:51,916 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:52,429 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483511.95-201077416737690/source", "state": "file", "uid": 0} >2018-10-02 08:31:52,455 p=1004 u=mistral | TASK [Copy in cleanup service] ************************************************* >2018-10-02 08:31:52,455 p=1004 u=mistral | Tuesday 02 October 2018 08:31:52 -0400 (0:00:00.596) 0:03:05.189 ******* >2018-10-02 08:31:52,485 p=1004 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:52,511 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,022 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483512.55-166594210435335/source", "state": "file", "uid": 0} >2018-10-02 08:31:53,048 p=1004 u=mistral | TASK [Enabling the cleanup service] ******************************************** >2018-10-02 08:31:53,048 p=1004 u=mistral | Tuesday 02 October 2018 08:31:53 -0400 (0:00:00.593) 0:03:05.782 ******* >2018-10-02 08:31:53,080 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,107 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,404 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "system.slice openvswitch.service basic.target systemd-journald.socket network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target docker.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": 
"no", "CanStart": "yes", "CanStop": "no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", 
"PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 08:31:53,431 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:31:53,432 p=1004 u=mistral | Tuesday 02 October 2018 08:31:53 -0400 (0:00:00.383) 0:03:06.165 ******* >2018-10-02 08:31:53,463 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,534 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], 
"container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 08:31:53,559 p=1004 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 08:31:53,559 p=1004 u=mistral | Tuesday 02 October 2018 08:31:53 -0400 (0:00:00.127) 0:03:06.292 ******* >2018-10-02 08:31:53,592 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,618 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:53,677 p=1004 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 08:31:53,677 p=1004 u=mistral | Tuesday 02 October 2018 08:31:53 -0400 (0:00:00.117) 0:03:06.410 ******* >2018-10-02 08:31:53,894 p=1004 u=mistral | changed: [compute-0] => {"changed": true} >2018-10-02 08:31:53,914 p=1004 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 08:31:53,914 p=1004 u=mistral | Tuesday 02 October 2018 08:31:53 -0400 (0:00:00.237) 0:03:06.648 ******* >2018-10-02 08:31:54,434 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 08:31:54,456 p=1004 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-10-02 08:31:54,456 p=1004 u=mistral | Tuesday 02 October 2018 08:31:54 -0400 (0:00:00.542) 0:03:07.190 ******* >2018-10-02 08:31:54,670 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, 
"group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:54,692 p=1004 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 08:31:54,692 p=1004 u=mistral | Tuesday 02 October 2018 08:31:54 -0400 (0:00:00.235) 0:03:07.426 ******* >2018-10-02 08:31:54,927 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 08:31:54,947 p=1004 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 08:31:54,948 p=1004 u=mistral | Tuesday 02 October 2018 08:31:54 -0400 (0:00:00.255) 0:03:07.681 ******* >2018-10-02 08:31:55,196 p=1004 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:55,216 p=1004 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 08:31:55,216 p=1004 u=mistral | Tuesday 02 October 2018 08:31:55 -0400 (0:00:00.268) 0:03:07.950 ******* >2018-10-02 08:31:55,454 p=1004 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 08:31:55,474 p=1004 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 08:31:55,474 p=1004 u=mistral | Tuesday 02 October 2018 08:31:55 -0400 (0:00:00.257) 0:03:08.207 ******* >2018-10-02 08:31:55,694 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": 
"root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:31:55,740 p=1004 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 08:31:55,740 p=1004 u=mistral | Tuesday 02 October 2018 08:31:55 -0400 (0:00:00.265) 0:03:08.473 ******* >2018-10-02 08:31:56,303 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483515.78-36186526253754/source", "state": "file", "uid": 0} >2018-10-02 08:31:56,325 p=1004 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 08:31:56,326 p=1004 u=mistral | Tuesday 02 October 2018 08:31:56 -0400 (0:00:00.585) 0:03:09.059 ******* >2018-10-02 08:31:56,622 p=1004 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:56,643 p=1004 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 08:31:56,643 p=1004 u=mistral | Tuesday 02 October 2018 08:31:56 -0400 (0:00:00.317) 0:03:09.376 ******* >2018-10-02 08:31:56,941 p=1004 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:31:57,005 p=1004 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 08:31:57,006 p=1004 u=mistral | Tuesday 02 October 2018 08:31:57 -0400 (0:00:00.362) 0:03:09.739 ******* >2018-10-02 08:31:57,230 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": 
false} >2018-10-02 08:31:57,253 p=1004 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-10-02 08:31:57,253 p=1004 u=mistral | Tuesday 02 October 2018 08:31:57 -0400 (0:00:00.247) 0:03:09.986 ******* >2018-10-02 08:31:57,276 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:31:57,278 p=1004 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 08:31:57,278 p=1004 u=mistral | Tuesday 02 October 2018 08:31:57 -0400 (0:00:00.024) 0:03:10.011 ******* >2018-10-02 08:31:57,526 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002375", "end": "2018-10-02 08:31:57.479669", "rc": 0, "start": "2018-10-02 08:31:57.477294", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 08:31:57,526 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 08:31:57,527 p=1004 u=mistral | Tuesday 02 October 2018 08:31:57 -0400 (0:00:00.248) 0:03:10.260 ******* >2018-10-02 08:31:57,817 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 08:31:57,817 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 08:31:57,818 p=1004 u=mistral | Tuesday 02 October 2018 08:31:57 -0400 (0:00:00.290) 0:03:10.551 ******* >2018-10-02 08:31:59,365 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "system.slice basic.target registries.service docker-storage-setup.service neutron-cleanup.service systemd-journald.socket rhel-push-plugin.socket network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", 
"AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": 
"0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service rhel-push-plugin.socket docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": 
"10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 08:31:59,367 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 08:31:59,367 p=1004 u=mistral | Tuesday 02 October 2018 08:31:59 -0400 (0:00:01.549) 0:03:12.101 ******* >2018-10-02 08:31:59,428 p=1004 u=mistral | Pausing for 10 seconds >2018-10-02 08:31:59,428 p=1004 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 08:31:59,428 p=1004 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 08:32:09,431 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 08:31:59.428199", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 08:32:09.428345", "user_input": ""} >2018-10-02 08:32:09,432 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 08:32:09,432 p=1004 u=mistral | Tuesday 02 October 2018 08:32:09 -0400 (0:00:10.065) 0:03:22.166 ******* >2018-10-02 08:32:09,703 p=1004 u=mistral | changed: [compute-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.037443", "end": "2018-10-02 08:32:09.677476", "rc": 0, "start": 
"2018-10-02 08:32:09.640033", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 08:32:09,724 p=1004 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 08:32:09,724 p=1004 u=mistral | Tuesday 02 October 2018 08:32:09 -0400 (0:00:00.291) 0:03:22.457 ******* >2018-10-02 08:32:10,004 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 08:31:59 EDT", "ActiveEnterTimestampMonotonic": "389905801", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice basic.target registries.service docker-storage-setup.service neutron-cleanup.service systemd-journald.socket rhel-push-plugin.socket network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:31:58 EDT", "AssertTimestampMonotonic": "388720723", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:31:58 EDT", "ConditionTimestampMonotonic": "388720722", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": 
"GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14687", "ExecMainStartTimestamp": "Tue 2018-10-02 08:31:58 EDT", "ExecMainStartTimestampMonotonic": "388721918", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 08:31:58 EDT] ; stop_time=[n/a] ; pid=14687 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:31:58 EDT", "InactiveExitTimestampMonotonic": "388721964", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": 
"1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14687", "MemoryAccounting": "no", "MemoryCurrent": "67416064", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service rhel-push-plugin.socket docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "19", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Tue 2018-10-02 08:31:59 
EDT", "WatchdogTimestampMonotonic": "389905681", "WatchdogUSec": "0"}} >2018-10-02 08:32:10,028 p=1004 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 08:32:10,029 p=1004 u=mistral | Tuesday 02 October 2018 08:32:10 -0400 (0:00:00.304) 0:03:22.762 ******* >2018-10-02 08:32:10,059 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,083 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,277 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1538483513.3461378, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537979153.5750983, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2886261, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1807870409", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 08:32:10,303 p=1004 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 08:32:10,303 p=1004 u=mistral | Tuesday 02 October 2018 08:32:10 -0400 (0:00:00.274) 0:03:23.036 ******* >2018-10-02 08:32:10,333 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,359 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-10-02 08:32:10,619 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ActiveEnterTimestampMonotonic": "3350179", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:25:32 EDT", "AssertTimestampMonotonic": "3349472", "Backlog": "128", "Before": "iscsid.service sockets.target shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ConditionTimestampMonotonic": "3349472", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:25:32 EDT", "InactiveExitTimestampMonotonic": "3350179", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", 
"LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", 
"Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-10-02 08:32:10,644 p=1004 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 08:32:10,644 p=1004 u=mistral | Tuesday 02 October 2018 08:32:10 -0400 (0:00:00.341) 0:03:23.378 ******* >2018-10-02 08:32:10,674 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,698 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,884 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:10,909 p=1004 u=mistral | TASK [nova logs readme] ******************************************************** >2018-10-02 08:32:10,910 p=1004 u=mistral | Tuesday 02 October 2018 08:32:10 -0400 (0:00:00.265) 0:03:23.643 ******* >2018-10-02 08:32:10,940 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:10,966 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,412 p=1004 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-10-02 08:32:11,412 p=1004 u=mistral | ...ignoring >2018-10-02 08:32:11,436 p=1004 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-10-02 08:32:11,436 p=1004 u=mistral | Tuesday 02 October 2018 08:32:11 -0400 (0:00:00.526) 0:03:24.169 ******* >2018-10-02 08:32:11,464 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,489 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,502 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,526 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:32:11,526 p=1004 u=mistral | Tuesday 02 October 2018 08:32:11 -0400 (0:00:00.089) 0:03:24.259 ******* >2018-10-02 08:32:11,555 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,557 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,558 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,586 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,587 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": 
"/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,588 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:11,786 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:11,954 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/nova/instances) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova/instances", "mode": "0755", "owner": "root", "path": "/var/lib/nova/instances", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:12,118 p=1004 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-10-02 08:32:12,146 p=1004 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-10-02 08:32:12,146 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.620) 0:03:24.880 ******* >2018-10-02 08:32:12,174 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,197 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,396 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-10-02 08:32:12,422 p=1004 u=mistral | TASK [is Instance HA enabled] ************************************************** >2018-10-02 08:32:12,422 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.275) 0:03:25.155 ******* >2018-10-02 08:32:12,451 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,478 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,517 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-10-02 08:32:12,540 p=1004 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-10-02 08:32:12,540 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.118) 0:03:25.274 ******* >2018-10-02 08:32:12,569 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,593 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,607 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,630 p=1004 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-10-02 08:32:12,630 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.089) 0:03:25.363 ******* >2018-10-02 08:32:12,660 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,686 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,707 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,735 p=1004 u=mistral | TASK 
[Get list of instance HA compute nodes] *********************************** >2018-10-02 08:32:12,735 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.105) 0:03:25.469 ******* >2018-10-02 08:32:12,766 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,793 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,809 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,834 p=1004 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-10-02 08:32:12,834 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.099) 0:03:25.568 ******* >2018-10-02 08:32:12,865 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,892 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,908 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,933 p=1004 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-10-02 08:32:12,933 p=1004 u=mistral | Tuesday 02 October 2018 08:32:12 -0400 (0:00:00.098) 0:03:25.667 ******* >2018-10-02 08:32:12,966 p=1004 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,967 p=1004 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,968 p=1004 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, 
"item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,998 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:12,999 p=1004 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,001 p=1004 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,002 p=1004 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,003 p=1004 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,004 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,007 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,265 p=1004 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-10-02 08:32:13,426 p=1004 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": 
"system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:13,590 p=1004 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-10-02 08:32:13,751 p=1004 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-10-02 08:32:13,907 p=1004 u=mistral | changed: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:13,932 p=1004 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-10-02 08:32:13,933 p=1004 u=mistral | Tuesday 02 October 2018 08:32:13 -0400 (0:00:00.999) 0:03:26.666 ******* >2018-10-02 08:32:13,962 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:13,988 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:14,247 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-10-02 08:32:14,273 p=1004 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-10-02 08:32:14,274 p=1004 u=mistral | Tuesday 02 October 2018 08:32:14 -0400 (0:00:00.341) 0:03:27.007 ******* >2018-10-02 
08:32:14,304 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:14,330 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:14,794 p=1004 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-10-02 08:32:14,819 p=1004 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-10-02 08:32:14,819 p=1004 u=mistral | Tuesday 02 October 2018 08:32:14 -0400 (0:00:00.545) 0:03:27.552 ******* >2018-10-02 08:32:14,849 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:14,875 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,094 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-10-02 08:32:15,120 p=1004 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-10-02 08:32:15,120 p=1004 u=mistral | Tuesday 02 October 2018 08:32:15 -0400 (0:00:00.301) 0:03:27.854 ******* >2018-10-02 08:32:15,152 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,179 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,422 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.040610", "end": "2018-10-02 
08:32:15.402616", "failed_when_result": false, "rc": 0, "start": "2018-10-02 08:32:15.362006", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.8.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.8.x86_64"]} >2018-10-02 08:32:15,448 p=1004 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-10-02 08:32:15,448 p=1004 u=mistral | Tuesday 02 October 2018 08:32:15 -0400 (0:00:00.328) 0:03:28.182 ******* >2018-10-02 08:32:15,479 p=1004 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,486 p=1004 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,515 p=1004 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,516 p=1004 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:15,824 p=1004 u=mistral | changed: [compute-0] => (item=libvirtd.service) => {"changed": true, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 08:25:34 EDT", "ActiveEnterTimestampMonotonic": "5365335", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "virtlockd.socket iscsid.service virtlogd.socket remote-fs.target dbus.service virtlockd.service network.target apparmor.service systemd-journald.socket basic.target system.slice local-fs.target virtlogd.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:25:34 EDT", "AssertTimestampMonotonic": 
"5132471", "Before": "multi-user.target shutdown.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:25:34 EDT", "ConditionTimestampMonotonic": "5132471", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/libvirtd.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1176", "ExecMainStartTimestamp": "Tue 2018-10-02 08:25:34 EDT", "ExecMainStartTimestampMonotonic": "5137123", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:25:34 EDT", "InactiveExitTimestampMonotonic": "5137173", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", 
"LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1176", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "virtlogd.socket basic.target virtlockd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", 
"Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "libvirt-guests.service multi-user.target", "Wants": "system.slice", "WatchdogTimestamp": "Tue 2018-10-02 08:25:34 EDT", "WatchdogTimestampMonotonic": "5365288", "WatchdogUSec": "0"}} >2018-10-02 08:32:16,016 p=1004 u=mistral | changed: [compute-0] => (item=virtlogd.socket) => {"changed": true, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ActiveEnterTimestampMonotonic": "3346873", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount -.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:25:32 EDT", "AssertTimestampMonotonic": "3345178", "Backlog": "128", "Before": "virtlogd.service sockets.target shutdown.target libvirtd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:25:32 EDT", "ConditionTimestampMonotonic": "3345177", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", 
"InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:25:32 EDT", "InactiveExitTimestampMonotonic": "3346873", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "-.mount sysinit.target", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", 
"StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-10-02 08:32:16,044 p=1004 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 08:32:16,044 p=1004 u=mistral | Tuesday 02 October 2018 08:32:16 -0400 (0:00:00.595) 0:03:28.777 ******* >2018-10-02 08:32:16,073 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,097 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,136 p=1004 u=mistral | ok: [compute-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 08:32:16,161 p=1004 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 08:32:16,161 p=1004 u=mistral | Tuesday 02 October 2018 08:32:16 -0400 (0:00:00.116) 0:03:28.894 ******* >2018-10-02 08:32:16,189 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,214 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,228 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,253 
p=1004 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 08:32:16,253 p=1004 u=mistral | Tuesday 02 October 2018 08:32:16 -0400 (0:00:00.092) 0:03:28.987 ******* >2018-10-02 08:32:16,286 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:16,312 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,458 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.952979", "end": "2018-10-02 08:32:23.437871", "rc": 0, "start": "2018-10-02 08:32:16.484892", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 08:32:23 ntpdate[15192]: adjust time server 10.11.160.238 offset -0.002063 sec", "stdout_lines": [" 2 Oct 08:32:23 ntpdate[15192]: adjust time server 10.11.160.238 offset -0.002063 sec"]} >2018-10-02 08:32:23,485 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:32:23,485 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:07.231) 0:03:36.218 ******* >2018-10-02 08:32:23,546 p=1004 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,547 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,549 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,550 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result 
was False"} >2018-10-02 08:32:23,567 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,575 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,603 p=1004 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-10-02 08:32:23,603 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:00.117) 0:03:36.336 ******* >2018-10-02 08:32:23,634 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,661 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,674 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,701 p=1004 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-10-02 08:32:23,701 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:00.098) 0:03:36.434 ******* >2018-10-02 08:32:23,730 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,758 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,770 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,796 p=1004 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-10-02 08:32:23,796 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:00.094) 0:03:36.529 ******* >2018-10-02 08:32:23,824 p=1004 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,851 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,863 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,888 p=1004 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-10-02 08:32:23,888 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:00.092) 0:03:36.622 ******* >2018-10-02 08:32:23,917 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,944 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,958 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:23,983 p=1004 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-10-02 08:32:23,983 p=1004 u=mistral | Tuesday 02 October 2018 08:32:23 -0400 (0:00:00.094) 0:03:36.716 ******* >2018-10-02 08:32:24,013 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,039 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,057 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,087 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:32:24,087 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.104) 0:03:36.820 ******* >2018-10-02 08:32:24,120 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:32:24,147 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,160 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,186 p=1004 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 08:32:24,186 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.099) 0:03:36.920 ******* >2018-10-02 08:32:24,217 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,245 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,257 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,284 p=1004 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 08:32:24,284 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.097) 0:03:37.017 ******* >2018-10-02 08:32:24,314 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,341 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,353 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,379 p=1004 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 08:32:24,379 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.095) 0:03:37.112 ******* >2018-10-02 08:32:24,413 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,441 
p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,452 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,479 p=1004 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 08:32:24,479 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.099) 0:03:37.212 ******* >2018-10-02 08:32:24,510 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,537 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,549 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,576 p=1004 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 08:32:24,576 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.097) 0:03:37.310 ******* >2018-10-02 08:32:24,608 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,634 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,649 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,675 p=1004 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 08:32:24,675 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.098) 0:03:37.409 ******* >2018-10-02 08:32:24,706 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,737 p=1004 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,750 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,777 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:32:24,777 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.102) 0:03:37.511 ******* >2018-10-02 08:32:24,809 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,837 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,850 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,877 p=1004 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 08:32:24,877 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.100) 0:03:37.611 ******* >2018-10-02 08:32:24,909 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,936 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,949 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:24,975 p=1004 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 08:32:24,975 p=1004 u=mistral | Tuesday 02 October 2018 08:32:24 -0400 (0:00:00.097) 0:03:37.708 ******* >2018-10-02 08:32:25,006 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,033 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:32:25,051 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,080 p=1004 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 08:32:25,081 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.105) 0:03:37.814 ******* >2018-10-02 08:32:25,113 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,139 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,157 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,183 p=1004 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 08:32:25,183 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.102) 0:03:37.916 ******* >2018-10-02 08:32:25,213 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,241 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,254 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,280 p=1004 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 08:32:25,280 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.097) 0:03:38.013 ******* >2018-10-02 08:32:25,312 p=1004 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,314 p=1004 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", 
"skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,315 p=1004 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,345 p=1004 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,346 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,354 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,362 p=1004 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,376 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,383 p=1004 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,410 p=1004 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-10-02 08:32:25,410 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.130) 0:03:38.144 ******* >2018-10-02 08:32:25,445 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,514 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,529 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:32:25,556 p=1004 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-10-02 08:32:25,557 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.146) 0:03:38.290 ******* >2018-10-02 08:32:25,588 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,616 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,629 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,656 p=1004 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-10-02 08:32:25,657 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.100) 0:03:38.390 ******* >2018-10-02 08:32:25,687 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,713 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,726 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,750 p=1004 u=mistral | TASK [swift logs readme] ******************************************************* >2018-10-02 08:32:25,751 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.094) 0:03:38.484 ******* >2018-10-02 08:32:25,778 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,802 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,820 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,846 p=1004 u=mistral | TASK [Check if 
rsyslog exists] ************************************************* >2018-10-02 08:32:25,846 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.095) 0:03:38.579 ******* >2018-10-02 08:32:25,877 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,902 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,912 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,937 p=1004 u=mistral | TASK [Forward logging to swift.log file] *************************************** >2018-10-02 08:32:25,937 p=1004 u=mistral | Tuesday 02 October 2018 08:32:25 -0400 (0:00:00.091) 0:03:38.670 ******* >2018-10-02 08:32:25,965 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:25,991 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,003 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,026 p=1004 u=mistral | TASK [Restart rsyslogd service after logging conf change] ********************** >2018-10-02 08:32:26,026 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.089) 0:03:38.759 ******* >2018-10-02 08:32:26,051 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,074 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,084 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,107 p=1004 u=mistral | TASK [Set fact for SwiftRawDisks] 
********************************************** >2018-10-02 08:32:26,107 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.081) 0:03:38.841 ******* >2018-10-02 08:32:26,138 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,164 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,174 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,196 p=1004 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-10-02 08:32:26,196 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.088) 0:03:38.929 ******* >2018-10-02 08:32:26,247 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,259 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,284 p=1004 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-10-02 08:32:26,284 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.088) 0:03:39.018 ******* >2018-10-02 08:32:26,331 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,343 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,364 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:32:26,364 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.080) 0:03:39.098 ******* >2018-10-02 08:32:26,399 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,442 p=1004 
u=mistral | ok: [ceph-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 08:32:26,444 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,475 p=1004 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 08:32:26,476 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.110) 0:03:39.209 ******* >2018-10-02 08:32:26,510 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,550 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:26,596 p=1004 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 08:32:26,596 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.120) 0:03:39.329 ******* >2018-10-02 08:32:26,819 p=1004 u=mistral | changed: [ceph-0] => {"changed": true} >2018-10-02 08:32:26,841 p=1004 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 08:32:26,842 p=1004 u=mistral | Tuesday 02 October 2018 08:32:26 -0400 (0:00:00.245) 0:03:39.575 ******* >2018-10-02 08:32:27,396 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 08:32:27,416 p=1004 u=mistral | TASK [container-registry : manage 
/etc/systemd/system/docker.service.d] ******** >2018-10-02 08:32:27,416 p=1004 u=mistral | Tuesday 02 October 2018 08:32:27 -0400 (0:00:00.574) 0:03:40.149 ******* >2018-10-02 08:32:27,631 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:27,653 p=1004 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 08:32:27,653 p=1004 u=mistral | Tuesday 02 October 2018 08:32:27 -0400 (0:00:00.237) 0:03:40.387 ******* >2018-10-02 08:32:27,890 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 08:32:27,912 p=1004 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 08:32:27,912 p=1004 u=mistral | Tuesday 02 October 2018 08:32:27 -0400 (0:00:00.258) 0:03:40.645 ******* >2018-10-02 08:32:28,155 p=1004 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:32:28,176 p=1004 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 08:32:28,176 p=1004 u=mistral | Tuesday 02 October 2018 08:32:28 -0400 (0:00:00.264) 0:03:40.909 ******* >2018-10-02 08:32:28,440 p=1004 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 08:32:28,461 p=1004 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 08:32:28,461 p=1004 u=mistral | Tuesday 02 October 2018 08:32:28 -0400 
(0:00:00.285) 0:03:41.195 ******* >2018-10-02 08:32:28,669 p=1004 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:28,729 p=1004 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 08:32:28,729 p=1004 u=mistral | Tuesday 02 October 2018 08:32:28 -0400 (0:00:00.268) 0:03:41.463 ******* >2018-10-02 08:32:29,329 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483548.78-73335969376746/source", "state": "file", "uid": 0} >2018-10-02 08:32:29,351 p=1004 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 08:32:29,351 p=1004 u=mistral | Tuesday 02 October 2018 08:32:29 -0400 (0:00:00.621) 0:03:42.085 ******* >2018-10-02 08:32:29,598 p=1004 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:32:29,618 p=1004 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 08:32:29,618 p=1004 u=mistral | Tuesday 02 October 2018 08:32:29 -0400 (0:00:00.266) 0:03:42.351 ******* >2018-10-02 08:32:29,858 p=1004 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 08:32:29,877 p=1004 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 
08:32:29,877 p=1004 u=mistral | Tuesday 02 October 2018 08:32:29 -0400 (0:00:00.259) 0:03:42.611 ******* >2018-10-02 08:32:30,105 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-10-02 08:32:30,126 p=1004 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-10-02 08:32:30,127 p=1004 u=mistral | Tuesday 02 October 2018 08:32:30 -0400 (0:00:00.249) 0:03:42.860 ******* >2018-10-02 08:32:30,150 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:30,152 p=1004 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 08:32:30,152 p=1004 u=mistral | Tuesday 02 October 2018 08:32:30 -0400 (0:00:00.025) 0:03:42.885 ******* >2018-10-02 08:32:30,458 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002517", "end": "2018-10-02 08:32:30.411989", "rc": 0, "start": "2018-10-02 08:32:30.409472", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 08:32:30,458 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 08:32:30,459 p=1004 u=mistral | Tuesday 02 October 2018 08:32:30 -0400 (0:00:00.306) 0:03:43.192 ******* >2018-10-02 08:32:30,888 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 08:32:30,889 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 08:32:30,889 p=1004 u=mistral | Tuesday 02 October 2018 08:32:30 -0400 (0:00:00.430) 0:03:43.622 ******* >2018-10-02 08:32:32,607 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": 
"network.target registries.service basic.target rhel-push-plugin.socket system.slice docker-storage-setup.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS 
$DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service basic.target docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", 
"SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 08:32:32,610 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 08:32:32,610 p=1004 u=mistral | Tuesday 02 October 2018 08:32:32 -0400 (0:00:01.720) 0:03:45.343 ******* >2018-10-02 08:32:32,680 p=1004 u=mistral | Pausing for 10 seconds >2018-10-02 08:32:32,680 p=1004 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 08:32:32,680 p=1004 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 08:32:42,684 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 08:32:32.679964", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 08:32:42.680155", "user_input": ""} >2018-10-02 08:32:42,684 p=1004 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 08:32:42,684 p=1004 u=mistral | Tuesday 02 October 2018 08:32:42 -0400 (0:00:10.074) 0:03:55.418 ******* 
>2018-10-02 08:32:42,977 p=1004 u=mistral | changed: [ceph-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.037145", "end": "2018-10-02 08:32:42.947837", "rc": 0, "start": "2018-10-02 08:32:42.910692", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 08:32:42,999 p=1004 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 08:32:42,999 p=1004 u=mistral | Tuesday 02 October 2018 08:32:42 -0400 (0:00:00.314) 0:03:55.733 ******* >2018-10-02 08:32:43,321 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 08:32:32 EDT", "ActiveEnterTimestampMonotonic": "423278764", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target registries.service basic.target rhel-push-plugin.socket system.slice docker-storage-setup.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 08:32:31 EDT", "AssertTimestampMonotonic": "422050609", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 08:32:31 EDT", "ConditionTimestampMonotonic": "422050608", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker 
Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13910", "ExecMainStartTimestamp": "Tue 2018-10-02 08:32:31 EDT", "ExecMainStartTimestampMonotonic": "422052260", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 08:32:31 EDT] ; stop_time=[n/a] ; pid=13910 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 08:32:31 EDT", "InactiveExitTimestampMonotonic": "422052299", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": 
"18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "13910", "MemoryAccounting": "no", "MemoryCurrent": "64221184", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service basic.target docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "17", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": 
"notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Tue 2018-10-02 08:32:32 EDT", "WatchdogTimestampMonotonic": "423278694", "WatchdogUSec": "0"}} >2018-10-02 08:32:43,348 p=1004 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 08:32:43,348 p=1004 u=mistral | Tuesday 02 October 2018 08:32:43 -0400 (0:00:00.349) 0:03:56.082 ******* >2018-10-02 08:32:43,378 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,417 p=1004 u=mistral | ok: [ceph-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 08:32:43,420 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,444 p=1004 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 08:32:43,445 p=1004 u=mistral | Tuesday 02 October 2018 08:32:43 -0400 (0:00:00.096) 0:03:56.178 ******* >2018-10-02 08:32:43,473 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,500 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,514 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,542 p=1004 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 08:32:43,543 p=1004 u=mistral | Tuesday 02 October 2018 08:32:43 -0400 (0:00:00.097) 0:03:56.276 ******* >2018-10-02 08:32:43,569 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:43,607 p=1004 
u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:50,642 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.862597", "end": "2018-10-02 08:32:50.618875", "rc": 0, "start": "2018-10-02 08:32:43.756278", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 08:32:50 ntpdate[14037]: adjust time server 10.11.160.238 offset -0.001643 sec", "stdout_lines": [" 2 Oct 08:32:50 ntpdate[14037]: adjust time server 10.11.160.238 offset -0.001643 sec"]} >2018-10-02 08:32:50,649 p=1004 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-10-02 08:32:50,667 p=1004 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 08:32:50,668 p=1004 u=mistral | Tuesday 02 October 2018 08:32:50 -0400 (0:00:07.125) 0:04:03.401 ******* >2018-10-02 08:32:50,698 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-10-02 08:32:50,712 p=1004 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 08:32:50,712 p=1004 u=mistral | Tuesday 02 October 2018 08:32:50 -0400 (0:00:00.044) 0:04:03.445 ******* >2018-10-02 08:32:50,883 p=1004 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "size": 6, "state": "directory", "uid": 42430} >2018-10-02 08:32:51,014 p=1004 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": 
"/var/lib/mistral/overcloud/ceph-ansible/host_vars", "size": 6, "state": "directory", "uid": 42430} >2018-10-02 08:32:51,151 p=1004 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 42430} >2018-10-02 08:32:51,164 p=1004 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 08:32:51,164 p=1004 u=mistral | Tuesday 02 October 2018 08:32:51 -0400 (0:00:00.452) 0:04:03.898 ******* >2018-10-02 08:32:51,785 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "49eddec87f5f0b216cfd1c56c2473fb70762554b", "dest": "/var/lib/mistral/overcloud/ceph-ansible/inventory.yml", "gid": 42430, "group": "mistral", "md5sum": "2131aad68fb2a9a47a1ceb8a4139d7a4", "mode": "0644", "owner": "mistral", "size": 526, "src": "/tmp/ansible-/ansible-tmp-1538483571.5-216011384787580/source", "state": "file", "uid": 42430} >2018-10-02 08:32:51,799 p=1004 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 08:32:51,800 p=1004 u=mistral | Tuesday 02 October 2018 08:32:51 -0400 (0:00:00.635) 0:04:04.533 ******* >2018-10-02 08:32:51,839 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "QCxcxEleE6gqzEZGAy8kTIeiR", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", 
"rgw_keystone_url": "http://172.17.1.28:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-12", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "4398e5b0-c63c-11e8-b95a-525400c8bd81", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", 
"mode": "0600", "name": "client.radosgw"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-10-02 08:32:51,860 p=1004 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 08:32:51,860 p=1004 u=mistral | Tuesday 02 October 2018 08:32:51 -0400 (0:00:00.060) 0:04:04.594 ******* >2018-10-02 08:32:52,190 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "5265076a6c75b0ddb0e34f1c2c9d55f682183dd4", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml", "gid": 42430, "group": "mistral", "md5sum": "bec1b2e8ad1c42519120e91b891dfd21", "mode": "0644", "owner": "mistral", "size": 3078, "src": "/tmp/ansible-/ansible-tmp-1538483571.91-128829949293296/source", "state": "file", "uid": 42430} >2018-10-02 08:32:52,207 p=1004 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 08:32:52,207 p=1004 u=mistral | Tuesday 02 October 2018 08:32:52 -0400 (0:00:00.346) 0:04:04.940 ******* >2018-10-02 08:32:52,241 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-10-02 08:32:52,256 p=1004 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 08:32:52,256 p=1004 u=mistral | Tuesday 02 October 2018 08:32:52 -0400 (0:00:00.048) 0:04:04.989 
******* >2018-10-02 08:32:52,580 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "736efc435c358cb150f966050ebc3ab5061819cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml", "gid": 42430, "group": "mistral", "md5sum": "2bc808d342a6452fceb69c11f7bc8c1e", "mode": "0644", "owner": "mistral", "size": 88, "src": "/tmp/ansible-/ansible-tmp-1538483572.29-24656561638590/source", "state": "file", "uid": 42430} >2018-10-02 08:32:52,594 p=1004 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 08:32:52,594 p=1004 u=mistral | Tuesday 02 October 2018 08:32:52 -0400 (0:00:00.338) 0:04:05.328 ******* >2018-10-02 08:32:52,923 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_data.json", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1538483572.63-123723219409351/source", "state": "file", "uid": 42430} >2018-10-02 08:32:52,938 p=1004 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 08:32:52,939 p=1004 u=mistral | Tuesday 02 October 2018 08:32:52 -0400 (0:00:00.344) 0:04:05.672 ******* >2018-10-02 08:32:53,257 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "6295759c7c940d5f447c8f2aa21ca4b89c07424a", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "gid": 42430, "group": "mistral", "md5sum": "3e3401cf992ddfe2f64ba89ba32d2941", "mode": "0644", "owner": "mistral", "size": 527, "src": "/tmp/ansible-/ansible-tmp-1538483572.97-258162496300421/source", "state": "file", "uid": 42430} >2018-10-02 08:32:53,271 p=1004 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 08:32:53,271 p=1004 u=mistral | 
Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.332) 0:04:06.005 ******* >2018-10-02 08:32:53,290 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:53,303 p=1004 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 08:32:53,303 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.031) 0:04:06.037 ******* >2018-10-02 08:32:53,321 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:53,335 p=1004 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 08:32:53,335 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.031) 0:04:06.068 ******* >2018-10-02 08:32:53,352 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:53,365 p=1004 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 08:32:53,365 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.030) 0:04:06.098 ******* >2018-10-02 08:32:53,387 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:53,400 p=1004 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 08:32:53,400 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.035) 0:04:06.133 ******* >2018-10-02 08:32:53,419 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:53,433 p=1004 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 08:32:53,433 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.032) 0:04:06.166 ******* >2018-10-02 08:32:53,462 p=1004 
u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-10-02 08:32:53,475 p=1004 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 08:32:53,475 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.042) 0:04:06.209 ******* >2018-10-02 08:32:53,796 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "gid": 42430, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "size": 46, "src": "/tmp/ansible-/ansible-tmp-1538483573.51-96429948480424/source", "state": "file", "uid": 42430} >2018-10-02 08:32:53,812 p=1004 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 08:32:53,812 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.336) 0:04:06.545 ******* >2018-10-02 08:32:53,845 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQBkYLNbAAAAABAAZaj/1kV4/FOi0ZBEaxPL1g==", "monitor_secret": "AQBkYLNbAAAAABAAPtOjxXjymErzGNcQab4sRQ=="}}, "changed": false} >2018-10-02 08:32:53,859 p=1004 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-10-02 08:32:53,860 p=1004 u=mistral | Tuesday 02 October 2018 08:32:53 -0400 (0:00:00.047) 0:04:06.593 ******* >2018-10-02 08:32:54,161 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "8902c5e22a09be21d37b1e5e2f4a9bfc88793ecd", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mons.yml", "gid": 42430, "group": "mistral", "md5sum": "55666bf9dac86f90c22a4f24d83f5fbd", "mode": "0644", "owner": "mistral", "size": 112, "src": "/tmp/ansible-/ansible-tmp-1538483573.89-98697708888602/source", 
"state": "file", "uid": 42430} >2018-10-02 08:32:54,175 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:32:54,175 p=1004 u=mistral | Tuesday 02 October 2018 08:32:54 -0400 (0:00:00.315) 0:04:06.909 ******* >2018-10-02 08:32:54,214 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"log_file": "tripleo-container-image-prepare.log"}, "changed": false} >2018-10-02 08:32:54,228 p=1004 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 08:32:54,228 p=1004 u=mistral | Tuesday 02 October 2018 08:32:54 -0400 (0:00:00.053) 0:04:06.962 ******* >2018-10-02 08:32:54,568 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.uVELhL-prepare-param", "size": 0, "state": "file", "uid": 42430} >2018-10-02 08:32:54,586 p=1004 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 08:32:54,586 p=1004 u=mistral | Tuesday 02 October 2018 08:32:54 -0400 (0:00:00.357) 0:04:07.319 ******* >2018-10-02 08:32:54,752 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.8ggRt9-role-data", "size": 0, "state": "file", "uid": 42430} >2018-10-02 08:32:54,768 p=1004 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 08:32:54,768 p=1004 u=mistral | Tuesday 02 October 2018 08:32:54 -0400 (0:00:00.182) 0:04:07.501 ******* >2018-10-02 08:32:55,105 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "ee4783631076c19990a802865b8c0a3c25baeba1", "dest": "/tmp/ansible.uVELhL-prepare-param", "gid": 42430, "group": "mistral", "md5sum": "be85bccfbd1e18c6ab1a8370c364fe60", "mode": "0600", "owner": "mistral", "size": 11187, "src": 
"/tmp/ansible-/ansible-tmp-1538483574.8-217144945092167/source", "state": "file", "uid": 42430} >2018-10-02 08:32:55,120 p=1004 u=mistral | TASK [Write role data file] **************************************************** >2018-10-02 08:32:55,120 p=1004 u=mistral | Tuesday 02 October 2018 08:32:55 -0400 (0:00:00.351) 0:04:07.853 ******* >2018-10-02 08:32:55,454 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "f4bd6ad5174a88673a5da2c3b6c2de3827e06b7b", "dest": "/tmp/ansible.8ggRt9-role-data", "gid": 42430, "group": "mistral", "md5sum": "d3ae9b59dea6998091971def17a31a6a", "mode": "0600", "owner": "mistral", "size": 13059, "src": "/tmp/ansible-/ansible-tmp-1538483575.15-8910931331650/source", "state": "file", "uid": 42430} >2018-10-02 08:32:55,468 p=1004 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 08:32:55,468 p=1004 u=mistral | Tuesday 02 October 2018 08:32:55 -0400 (0:00:00.348) 0:04:08.202 ******* >2018-10-02 08:32:57,266 p=1004 u=mistral | [WARNING]: Consider using 'become', 'become_method', and 'become_user' rather >than running sudo > >2018-10-02 08:32:57,267 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "cmd": "sudo /usr/bin/tripleo-container-image-prepare --roles-file /tmp/ansible.8ggRt9-role-data --environment-file /tmp/ansible.uVELhL-prepare-param --cleanup partial 2> tripleo-container-image-prepare.log", "delta": "0:00:01.633888", "end": "2018-10-02 08:32:57.248181", "rc": 0, "start": "2018-10-02 08:32:55.614293", "stderr": "", "stderr_lines": [], "stdout": "null\n...", "stdout_lines": ["null", "..."]} >2018-10-02 08:32:57,282 p=1004 u=mistral | TASK [Delete param file] ******************************************************* >2018-10-02 08:32:57,282 p=1004 u=mistral | Tuesday 02 October 2018 08:32:57 -0400 (0:00:01.813) 0:04:10.015 ******* >2018-10-02 08:32:57,443 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "path": 
"/tmp/ansible.uVELhL-prepare-param", "state": "absent"} >2018-10-02 08:32:57,458 p=1004 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 08:32:57,458 p=1004 u=mistral | Tuesday 02 October 2018 08:32:57 -0400 (0:00:00.176) 0:04:10.192 ******* >2018-10-02 08:32:57,632 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "path": "/tmp/ansible.8ggRt9-role-data", "state": "absent"} >2018-10-02 08:32:57,646 p=1004 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 08:32:57,646 p=1004 u=mistral | Tuesday 02 October 2018 08:32:57 -0400 (0:00:00.187) 0:04:10.380 ******* >2018-10-02 08:32:57,684 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-10-02 08:32:57,698 p=1004 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 08:32:57,698 p=1004 u=mistral | Tuesday 02 October 2018 08:32:57 -0400 (0:00:00.052) 0:04:10.432 ******* >2018-10-02 08:32:58,024 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/clients.yml", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1538483577.73-131964123798745/source", "state": "file", "uid": 42430} >2018-10-02 08:32:58,040 p=1004 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 08:32:58,041 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.342) 0:04:10.774 ******* >2018-10-02 08:32:58,073 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"], "journal_size": 512, "osd_objectstore": 
"filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-10-02 08:32:58,089 p=1004 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 08:32:58,089 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.048) 0:04:10.823 ******* >2018-10-02 08:32:58,405 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "a209fd8d503be2b45dc87935a930c08a563088cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/osds.yml", "gid": 42430, "group": "mistral", "md5sum": "114fe63af169ecb1b28b951266282ba7", "mode": "0644", "owner": "mistral", "size": 134, "src": "/tmp/ansible-/ansible-tmp-1538483578.12-278331648311380/source", "state": "file", "uid": 42430} >2018-10-02 08:32:58,411 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-10-02 08:32:58,418 p=1004 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-10-02 08:32:58,446 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 08:32:58,446 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.356) 0:04:11.179 ******* >2018-10-02 08:32:58,663 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:58,664 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:58,774 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", 
"secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:32:58,800 p=1004 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 08:32:58,800 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.354) 0:04:11.534 ******* >2018-10-02 08:32:58,830 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,855 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,867 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,892 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 08:32:58,892 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.091) 0:04:11.625 ******* >2018-10-02 08:32:58,919 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,945 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,958 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:58,982 p=1004 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 08:32:58,982 p=1004 u=mistral | Tuesday 02 October 2018 08:32:58 -0400 (0:00:00.089) 0:04:11.715 ******* >2018-10-02 08:32:59,567 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8cc2a8154fe8261f1ad4dbbf7151db6f5d016a04", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "ea4a5c9cd9eca53a460514b61dc3d011", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", 
"size": 1631, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483579.03-92131956637965/source", "state": "file", "uid": 0} >2018-10-02 08:32:59,641 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "44355f328588ff032fb9d91a3fdf2a8f427f6ac1", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "d14bfa59823532755440579b4b515901", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1589, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483579.13-78472653824789/source", "state": "file", "uid": 0} >2018-10-02 08:32:59,654 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "0b7508ea11b5540c4e639bbb30162d8fa1fc1cc5", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "43135571b1950c38bbce98ace30272ac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1641, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483579.15-263382810008853/source", "state": "file", "uid": 0} >2018-10-02 08:32:59,681 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:32:59,681 p=1004 u=mistral | Tuesday 02 October 2018 08:32:59 -0400 (0:00:00.698) 0:04:12.414 ******* >2018-10-02 08:32:59,713 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:59,738 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:59,752 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:32:59,777 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:32:59,777 p=1004 u=mistral | Tuesday 02 October 2018 08:32:59 -0400 (0:00:00.096) 0:04:12.511 ******* >2018-10-02 
08:32:59,806 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:32:59,832 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:32:59,846 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:32:59,871 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 08:32:59,871 p=1004 u=mistral | Tuesday 02 October 2018 08:32:59 -0400 (0:00:00.093) 0:04:12.604 ******* >2018-10-02 08:33:00,153 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 08:33:00,188 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 08:33:00,210 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 08:33:00,237 p=1004 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 08:33:00,237 p=1004 u=mistral | Tuesday 02 October 2018 08:33:00 -0400 (0:00:00.366) 0:04:12.971 ******* >2018-10-02 08:33:00,271 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,299 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,313 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,339 p=1004 u=mistral | TASK [Create 
/var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 08:33:00,339 p=1004 u=mistral | Tuesday 02 October 2018 08:33:00 -0400 (0:00:00.102) 0:04:13.073 ******* >2018-10-02 08:33:00,372 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,399 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,414 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:00,440 p=1004 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 08:33:00,440 p=1004 u=mistral | Tuesday 02 October 2018 08:33:00 -0400 (0:00:00.100) 0:04:13.174 ******* >2018-10-02 08:33:01,073 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "b3363e1a751a8a08f70b1cdcdb25fb401ca3ae14", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "2663d832240304f41aa83aa686212527", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 309, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483580.58-25572340370676/source", "state": "file", "uid": 0} >2018-10-02 08:33:01,136 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "84cd01e8e56b134f3242d2b61c139ff7cb5c4499", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "5e0fce94ac17c7c8e1e04aea47eca983", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483580.63-97051420220530/source", "state": "file", "uid": 0} >2018-10-02 08:33:01,148 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c89e1b9f795e5727c7e181b2184927fb1c907aaa", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, 
"group": "root", "md5sum": "ec6efda3b5bbb102a9ef3288d38138e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 15684, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483580.61-245139964883740/source", "state": "file", "uid": 0} >2018-10-02 08:33:01,172 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:33:01,172 p=1004 u=mistral | Tuesday 02 October 2018 08:33:01 -0400 (0:00:00.731) 0:04:13.905 ******* >2018-10-02 08:33:01,202 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:01,227 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:01,240 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:01,264 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:33:01,264 p=1004 u=mistral | Tuesday 02 October 2018 08:33:01 -0400 (0:00:00.092) 0:04:13.998 ******* >2018-10-02 08:33:01,293 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:33:01,317 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:33:01,329 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:33:01,354 p=1004 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 08:33:01,354 p=1004 u=mistral | Tuesday 02 October 2018 08:33:01 -0400 (0:00:00.089) 0:04:14.087 ******* >2018-10-02 08:33:01,561 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:01,594 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 
0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:01,620 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:01,647 p=1004 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 08:33:01,647 p=1004 u=mistral | Tuesday 02 October 2018 08:33:01 -0400 (0:00:00.293) 0:04:14.380 ******* >2018-10-02 08:33:01,858 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 08:33:01,882 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 08:33:01,920 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 08:33:01,948 p=1004 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-10-02 08:33:01,948 p=1004 u=mistral | Tuesday 02 October 2018 08:33:01 -0400 (0:00:00.301) 0:04:14.681 ******* >2018-10-02 08:33:02,510 p=1004 u=mistral | changed: [controller-0] => (item=create_swift_secret.sh) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini 
--get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483582.04-179175910692702/source", "state": "file", "uid": 0} >2018-10-02 08:33:02,602 p=1004 u=mistral | changed: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483582.1-279321378619520/source", "state": "file", "uid": 0} >2018-10-02 08:33:02,987 p=1004 u=mistral | changed: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483582.54-44775307909138/source", "state": "file", "uid": 0} >2018-10-02 08:33:03,133 p=1004 u=mistral | changed: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": true, "checksum": "052884875dafcd3e79ee18bebaed25f6994a1c37", "dest": "/var/lib/docker-config-scripts/nova_statedir_ownership.py", "gid": 0, "group": "root", "item": ["nova_statedir_ownership.py", 
{"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass 
NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n 
self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "md5sum": "c8d51232f071c7b1fef053299a1b66c0", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6075, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483582.63-232876471061537/source", "state": "file", "uid": 0} >2018-10-02 08:33:03,483 p=1004 u=mistral | changed: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483583.02-244693030285371/source", "state": "file", "uid": 0} >2018-10-02 08:33:03,980 p=1004 u=mistral | changed: [controller-0] => (item=nova_api_discover_hosts.sh) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": 
["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" 
$(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483583.51-249311963071576/source", "state": "file", "uid": 0} >2018-10-02 08:33:04,478 p=1004 u=mistral | changed: [controller-0] => (item=nova_api_ensure_default_cell.sh) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": ["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483584.01-69416156244915/source", "state": "file", "uid": 0} >2018-10-02 08:33:04,994 p=1004 u=mistral | changed: [controller-0] => (item=set_swift_keymaster_key_id.sh) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": 
"/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483584.51-195171590941323/source", "state": "file", "uid": 0} >2018-10-02 08:33:05,025 p=1004 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-10-02 08:33:05,025 p=1004 u=mistral | Tuesday 02 October 2018 08:33:05 -0400 (0:00:03.076) 0:04:17.758 ******* >2018-10-02 08:33:05,086 p=1004 u=mistral | 
ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,093 p=1004 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,102 p=1004 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,113 p=1004 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,117 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,124 p=1004 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,130 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,135 p=1004 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,136 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,141 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": 
false} >2018-10-02 08:33:05,143 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,149 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,153 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,160 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,165 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,169 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,176 p=1004 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,177 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,184 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,192 p=1004 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this 
result", "changed": false} >2018-10-02 08:33:05,194 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,220 p=1004 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-10-02 08:33:05,221 p=1004 u=mistral | Tuesday 02 October 2018 08:33:05 -0400 (0:00:00.195) 0:04:17.954 ******* >2018-10-02 08:33:05,313 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,403 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,894 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:33:05,921 p=1004 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-10-02 08:33:05,922 p=1004 u=mistral | Tuesday 02 October 2018 08:33:05 -0400 (0:00:00.701) 0:04:18.655 ******* >2018-10-02 08:33:06,483 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "2ee7eba951b9539c7cf5587358f3d1f924eb1054", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "65035d0956f8abda7cecaf08a6f062f8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 152634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483585.98-41211231574173/source", "state": "file", "uid": 0} >2018-10-02 08:33:06,514 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e18ade306ec73767bc37d9997f5a6c043e08ae9a", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": 
"13ae8ed298be30c0b0e40e4f4956b7e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1477, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483586.01-12978921851406/source", "state": "file", "uid": 0} >2018-10-02 08:33:06,545 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "47e6b90cf133abcc759ebca645ccd7f04261545f", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "c46a2b4f258d2872a490e4259933884f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17511, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483586.04-215184291631247/source", "state": "file", "uid": 0} >2018-10-02 08:33:06,572 p=1004 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-10-02 08:33:06,572 p=1004 u=mistral | Tuesday 02 October 2018 08:33:06 -0400 (0:00:00.650) 0:04:19.306 ******* >2018-10-02 08:33:07,172 p=1004 u=mistral | changed: [ceph-0] => (item=step_1) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483586.65-65289207976451/source", "state": "file", "uid": 0} >2018-10-02 08:33:07,194 p=1004 u=mistral | changed: [controller-0] => (item=step_1) => {"changed": true, "checksum": "bc58a399137e67c680429c5a172a695049bc5ee4", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' 
'192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "md5sum": "6254b603ec9b76635de5e7cc8ec526e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9190, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483586.69-30231990536162/source", "state": "file", "uid": 0} >2018-10-02 08:33:07,211 p=1004 u=mistral | changed: [compute-0] => (item=step_1) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483586.7-111675456183565/source", "state": "file", "uid": 0} >2018-10-02 08:33:07,678 p=1004 u=mistral | changed: [ceph-0] => (item=step_2) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", 
"size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.18-226328059090906/source", "state": "file", "uid": 0} >2018-10-02 08:33:07,719 p=1004 u=mistral | changed: [compute-0] => (item=step_2) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.22-268959211020346/source", "state": "file", "uid": 0} >2018-10-02 08:33:07,728 p=1004 u=mistral | changed: [controller-0] => (item=step_2) => {"changed": true, "checksum": "f919bfdf16735356aa9b95be115c795b92ce8eca", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": 
"host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include 
::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], 
"detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", 
"--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "md5sum": "8996a74b29cfc63335a9909c28917ec1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22855, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.21-240911163335443/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,196 p=1004 u=mistral | changed: [ceph-0] => (item=step_3) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.69-253867369798699/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,258 p=1004 u=mistral | changed: [controller-0] => (item=step_3) => {"changed": true, "checksum": "25119311f9f4f1f313da1a7026c1ade80dd8da11", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, 
"cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": 
"host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", 
"/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "md5sum": "defa48e175322c689a940b9467902b34", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 29101, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.74-135194447078589/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,263 p=1004 u=mistral | changed: [compute-0] => (item=step_3) => {"changed": true, "checksum": "4a96db65f846bf6c09dc1fcb89bb5bad098ff3e1", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], 
"image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", 
"/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "md5sum": "c685d2413ae9ceb911d50139c1a8d8d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7208, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483587.73-160505921294424/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,683 p=1004 u=mistral | changed: [ceph-0] => (item=step_4) => {"changed": true, "checksum": "b4026aa009bb07e185a7d24fc6ae29313522e7ca", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}], "md5sum": "c25ae9212c604d8902701f31742ce214", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.21-159863424664341/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,818 p=1004 u=mistral | changed: [controller-0] => (item=step_4) => 
{"changed": true, "checksum": "f0ef7eafc400be0c7bef94d894bcdc91c90f877d", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": 
"always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "md5sum": "e09b84d266c6504660080a51f2197cb8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 60195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.27-180964454369201/source", "state": "file", "uid": 0} >2018-10-02 08:33:08,825 p=1004 u=mistral | changed: [compute-0] => (item=step_4) => {"changed": true, "checksum": "c9336e49f241d8245859d2d8a7a89600524b4bab", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", 
"/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "md5sum": "60ae00a8c7bd0b5d87a2eef258c54629", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8816, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.27-1823415169726/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,165 p=1004 u=mistral | changed: [ceph-0] => (item=step_5) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.69-77738940771910/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,325 p=1004 u=mistral | changed: [controller-0] => (item=step_5) => {"changed": true, "checksum": "9c28452e8506a4083a5c821f2d2815412d5aa326", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": 
["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "md5sum": "0f753507f365a66b5db2f1eedffbf750", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19124, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.82-14371849642260/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,360 p=1004 u=mistral | changed: [compute-0] => (item=step_5) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483588.82-136250574952173/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,657 p=1004 u=mistral | changed: [ceph-0] => (item=step_6) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483589.18-28046278659118/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,830 p=1004 u=mistral | changed: [controller-0] => (item=step_6) => {"changed": true, 
"checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483589.33-229161230480533/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,887 p=1004 u=mistral | changed: [compute-0] => (item=step_6) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483589.37-93550980427716/source", "state": "file", "uid": 0} >2018-10-02 08:33:09,918 p=1004 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 08:33:09,918 p=1004 u=mistral | Tuesday 02 October 2018 08:33:09 -0400 (0:00:03.345) 0:04:22.652 ******* >2018-10-02 08:33:10,119 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:10,203 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:10,221 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": 
"unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 08:33:10,249 p=1004 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 08:33:10,249 p=1004 u=mistral | Tuesday 02 October 2018 08:33:10 -0400 (0:00:00.330) 0:04:22.982 ******* >2018-10-02 08:33:10,806 p=1004 u=mistral | changed: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483590.34-122084376890670/source", "state": "file", "uid": 0} >2018-10-02 08:33:10,905 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": true, "checksum": "76874c2f28ef848007e675a4b52d67ff252c4cf1", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "4a3ce71cb7b5b699dcbd2ca937e5ea7c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 323, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483590.38-20130371572234/source", "state": "file", "uid": 0} >2018-10-02 08:33:11,046 
p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": true, "checksum": "7eddb177fe0e9635a939871db86a4cef04690de6", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "3cd09d6b656982376119207e483b6aee", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483590.52-15155791222945/source", "state": "file", "uid": 0} >2018-10-02 08:33:11,433 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": true, "checksum": "d310c205955d0f5d508329bf624cbe8345535c34", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "md5sum": "22ef322b4a91ebca32ec0dd9c41be102", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483590.91-2235269335327/source", "state": "file", "uid": 0} >2018-10-02 08:33:11,546 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": true, "checksum": "01aea38e8d76afa53499dc261de8b66faadc5ff8", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "b4dfbf9ca1823ec2828eb3c2b4dc6126", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 398, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483591.05-48280014307628/source", "state": "file", "uid": 0} >2018-10-02 08:33:11,953 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483591.44-233035869527961/source", "state": "file", "uid": 0} >2018-10-02 08:33:12,017 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": true, "checksum": "f1bb3c5d81fed87f945e29bbb59dbc822fe154ec", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "165c9900e4df3de03c25903072139acf", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483591.55-41900317731399/source", "state": "file", "uid": 0} >2018-10-02 08:33:12,487 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": true, "checksum": "297543dc37af33605befea77ef4a371f0a6a3662", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "51a8878fe08bb182bee7ac73da2e17d3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483591.96-18964350680658/source", "state": "file", "uid": 0} >2018-10-02 08:33:12,532 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": true, "checksum": "3524989e2f062b628ff39bfa1826a299e9e87643", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "72ee81994099352750394944a4944691", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483592.03-39336619281540/source", "state": "file", "uid": 0} >2018-10-02 08:33:13,004 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": true, "checksum": 
"5ebfe90d3d5db802ffc11e62806a1c471e899f42", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "md5sum": "3a6d1baa3e960be9487b87e96286b82f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483592.5-26730060421331/source", "state": "file", "uid": 0} >2018-10-02 08:33:13,024 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": true, "checksum": "33088791c573ef63b952f0f1fde999b995c207f2", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "225cd56e124ed8119b457e8966d0f1e5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 323, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483592.54-48612710956007/source", "state": "file", "uid": 0} >2018-10-02 08:33:13,527 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": true, "checksum": "6afeee3c19010437bf1ccc38749ac6c0b96cc70a", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "430d079a841e830bd7f78bb526583b96", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 927, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483593.01-51772489392053/source", "state": "file", "uid": 0} >2018-10-02 08:33:13,528 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": true, "checksum": "60eec3e718b294ae05e52da14f2db42a06fb93a9", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "md5sum": "2f01e419ebdad98b2d5e49b94c8c980e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483593.03-119892767044728/source", "state": "file", "uid": 0} >2018-10-02 08:33:14,025 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => 
{"changed": true, "checksum": "74ef43c5be2146af6ac8aec7c636329654b98cb4", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "edb991e706ddfdf46e5953dd9dd50f20", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 409, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483593.54-271865662540510/source", "state": "file", "uid": 0} >2018-10-02 08:33:14,035 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": true, "checksum": "65ab6d1486d27536bef71d729d69d5a4e1ed39cc", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "122a85849fd7331643c266d1c06aa44e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 818, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483593.54-49927572412490/source", "state": "file", "uid": 0} >2018-10-02 08:33:14,511 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": true, 
"checksum": "cf9eab2e83b0ed617d39b36638b9dbbaed31f675", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "4c96926f14f7c02894093b15f77f66ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483594.03-118871139662679/source", "state": "file", "uid": 0} >2018-10-02 08:33:14,524 p=1004 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": true, "checksum": "75ebc27be03214be0291f0ed5776b9d9c05b1773", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "1971e50723b046a7c66a1ecc7635dc67", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 279, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483594.04-116389502284431/source", "state": "file", "uid": 0} >2018-10-02 08:33:14,988 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => {"changed": true, "checksum": "fc28ba7bb64dda776da4fb6b65ab4cce58c55043", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "9dc8348aa5d9c1399e5ec9b9a8bf39a5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1001, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483594.52-94254912876707/source", "state": "file", "uid": 0} >2018-10-02 08:33:15,464 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": true, "checksum": "8247ea37983cee31da341830b5a7351da4f55bb6", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "4d54f110f3905ea3ab1eeca28c8a20f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 493, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483595.0-265905804249113/source", "state": "file", "uid": 0} >2018-10-02 08:33:15,932 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": true, "checksum": "9a45013e6489f8e1a4b26ce2bac479740b72a291", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, 
"group": "root", "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "9087654f4a760bf3dc681aaa4ac80b46", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483595.47-255894262484648/source", "state": "file", "uid": 0} >2018-10-02 08:33:16,394 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": true, "checksum": "498341e7e5d08339f5a407a871691f38aeb88160", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "575af5380cd86d03642aec48e0b09839", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 251, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483595.94-165375938138659/source", "state": "file", "uid": 0} >2018-10-02 08:33:16,858 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": true, "checksum": "1b1c2ce62e71e24ba6e806ac1fa0a25f9bac02bc", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": 
["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "d84278d9f994dc2e8aeae1544bf1ff9e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 836, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483596.4-161223117238755/source", "state": "file", "uid": 0} >2018-10-02 08:33:17,315 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483596.87-202565235382172/source", "state": "file", "uid": 0} >2018-10-02 08:33:17,780 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": true, "checksum": "398476f1850153ccbdec3645eb518301076734d3", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_api.json", 
{"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "46345cb5377a2113e5df6f3a55609501", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 755, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483597.32-191790629504993/source", "state": "file", "uid": 0} >2018-10-02 08:33:18,241 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": true, "checksum": "6d8f6ad47b0adea396ec88bc87650c3b37f95b29", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "7cfa51bbfe45f59f2687e86208b2cd32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 811, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483597.79-49952080452614/source", "state": "file", "uid": 0} >2018-10-02 08:33:18,691 p=1004 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": true, "checksum": "2c277290410059b85904555394495ff85e713585", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "a0d142d2edc479ff4164aa5a354d45c2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 751, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483598.25-251974728349673/source", "state": "file", "uid": 0} >2018-10-02 08:33:19,146 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": true, "checksum": "bc9cdf4be4f10268a8921bf7f955044bca40a6d7", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "ed7d62ee2974152d4f7ad928ecffffa3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 750, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483598.7-71765329280631/source", "state": "file", "uid": 0} >2018-10-02 08:33:19,604 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => {"changed": true, "checksum": "9a4b9d1d7f16f7bf07f22ea58e51305a17651991", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "md5sum": "df253cc0124ec5e92113b43d8f45a1bd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1037, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483599.16-91229792352327/source", "state": "file", "uid": 0} >2018-10-02 08:33:20,063 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": true, "checksum": "d8ba895b605f2f569f938611610bd87d4c0c1843", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "73c8da5dcb124ae745f0dafdecb759fa", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483599.61-69698917503573/source", "state": "file", "uid": 0} >2018-10-02 08:33:20,524 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": true, "checksum": "d8ba895b605f2f569f938611610bd87d4c0c1843", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "73c8da5dcb124ae745f0dafdecb759fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483600.07-271084570978484/source", "state": "file", "uid": 0} >2018-10-02 08:33:20,998 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": true, "checksum": "3094b61a55d29dfe193b697638c9a1225a2eab4b", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "b872ad178d48140e84acf295deb896b1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 393, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483600.53-13075869421407/source", "state": "file", "uid": 0} >2018-10-02 08:33:21,480 p=1004 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": true, "checksum": "da38d4d29e5f3b6754fd147b5e4ce08867367b4f", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "d9d073e1d28f19dae913d1198a17461b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483601.01-216954800247810/source", "state": "file", "uid": 0} >2018-10-02 08:33:21,967 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": true, "checksum": "10b4664bce96ab9dbf9a249322506726643d22b9", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "md5sum": "50f5bff449ad137aa0772554002a49fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 911, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483601.49-187242487498857/source", "state": "file", "uid": 0} >2018-10-02 08:33:22,403 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": true, "checksum": "d310c205955d0f5d508329bf624cbe8345535c34", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "md5sum": "22ef322b4a91ebca32ec0dd9c41be102", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483601.97-270416582197824/source", "state": "file", "uid": 0} >2018-10-02 08:33:22,861 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483602.41-13420874801450/source", "state": "file", "uid": 0} >2018-10-02 08:33:23,301 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": true, "checksum": "d445d71ded9217fe930e649813e1dcf19f36271a", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": 
"/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "md5sum": "aebc5c71b140992f2e480d6f98cf0957", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 405, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483602.87-23496173502067/source", "state": "file", "uid": 0} >2018-10-02 08:33:23,759 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483603.31-159138370886617/source", "state": "file", "uid": 0} >2018-10-02 08:33:24,229 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": true, "checksum": "16d384bc3e0d8580a0d746eedecd5375f23ba9f6", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], 
"permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "md5sum": "3c91849f4fcf4c2188667e6ed5db2a57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1133, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483603.77-7970568770324/source", "state": "file", "uid": 0} >2018-10-02 08:33:24,681 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": true, "checksum": "72ccad463ca9cf6403c76cb32ab9a2a7b929d0ac", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "ec071a9599838390074469cf52d6616b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 702, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483604.24-109430555534897/source", "state": "file", "uid": 0} >2018-10-02 08:33:25,148 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": true, "checksum": "058d5a1972085dcd7cdadcaa416c9cbb2382cda2", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": 
["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "md5sum": "3d55d72eb7fff4e8f3754ee9770b22d2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483604.69-69015959008331/source", "state": "file", "uid": 0} >2018-10-02 08:33:25,608 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": true, "checksum": "f5ffbfdade14575cf8c53d18447e2b2b9c59cac7", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "md5sum": "e159dec67bbce60437c2ee885efa6b27", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 844, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483605.16-20961141096923/source", "state": "file", "uid": 0} >2018-10-02 08:33:26,066 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": true, "checksum": "cd52f696acdcff22cd6714ce45a850b21eab4d9e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "md5sum": "f5e7ef39070696edf4df9ac35bb4aa35", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 827, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483605.62-194808202386263/source", "state": "file", "uid": 0} >2018-10-02 08:33:26,491 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": true, "checksum": "297543dc37af33605befea77ef4a371f0a6a3662", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, 
"group": "root", "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "51a8878fe08bb182bee7ac73da2e17d3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483606.08-252186594965710/source", "state": "file", "uid": 0} >2018-10-02 08:33:26,943 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483606.5-117304551282524/source", "state": "file", "uid": 0} >2018-10-02 08:33:27,419 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": true, "checksum": "44ed45616466b118b8c77858c293e379b590863d", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], 
"md5sum": "384513e893d6ff439145e291b5ddd786", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483606.95-167383177331948/source", "state": "file", "uid": 0} >2018-10-02 08:33:27,879 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": true, "checksum": "9faed2be90b741cddf13fb61327173d1b58847c5", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "ce0cf11faae2d6c4ca22fb929827d0c8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 393, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483607.43-158992077929104/source", "state": "file", "uid": 0} >2018-10-02 08:33:28,356 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": true, "checksum": "9e93d361bdd695857cfec8d32309445f8508fa80", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "86ab49ff06297d94cfb501948e69aba6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483607.89-270837728126594/source", "state": "file", "uid": 0} >2018-10-02 08:33:28,807 p=1004 
u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": true, "checksum": "87c1b1409f70be6c58ecff47b5ed82c4fe98a20e", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "5fd5c52813e4e48f0556254cb98e6e2c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 401, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483608.37-148300959239475/source", "state": "file", "uid": 0} >2018-10-02 08:33:29,294 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": true, "checksum": "9486a72b72c8b74a8db176060403f69d46b47a43", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "fcf00203f1d0e35dc5d6e3032c41f168", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 402, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483608.82-171803865092350/source", "state": "file", "uid": 0} >2018-10-02 08:33:29,751 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": true, "checksum": "44ed45616466b118b8c77858c293e379b590863d", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": 
["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "384513e893d6ff439145e291b5ddd786", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483609.3-247871882458317/source", "state": "file", "uid": 0} >2018-10-02 08:33:30,222 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": true, "checksum": "54c5708c92f2f717a8804ec7cf58c66648398685", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "46d2917b61186faac8a91082715ada76", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483609.76-33782579286488/source", "state": "file", "uid": 0} >2018-10-02 08:33:30,672 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": true, "checksum": "1602845294bee8781ed7124c5e61794d5174a570", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "md5sum": "43607a8514595d41a4ef66df9ef5c82b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 751, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483610.23-77787206442448/source", "state": "file", "uid": 0} >2018-10-02 08:33:31,145 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": true, "checksum": "d6e42dfd0293a2e8eb981dbb63aa49bf424c8e53", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "md5sum": "fd86d49907f869879bfac107c48a4515", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 406, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483610.68-244990923196421/source", "state": "file", "uid": 0} >2018-10-02 08:33:31,595 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": true, "checksum": "a1699d6d38b070ef10a31b28e09827b21c832053", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "md5sum": "47843c1764359e4e90142aaaaf4a712f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1295, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483611.15-6624579121961/source", "state": "file", "uid": 0} >2018-10-02 08:33:32,068 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": true, "checksum": "e28b5f6e4c0c330004d1adcadc7854bb6fb6a276", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "md5sum": "4f3e9e8b7a99b46afad0a1c46fba9b37", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 863, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483611.61-20808092873412/source", "state": "file", "uid": 0} >2018-10-02 08:33:32,554 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": true, "checksum": 
"a5aefe3f08ebc2eb779b4b5d84f1bdcf52212da7", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "md5sum": "5fbd6db7922fa356062d34d68189986c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 834, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483612.08-50153643232969/source", "state": "file", "uid": 0} >2018-10-02 08:33:33,028 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": true, "checksum": "ac32d17e2d9a2ddbe9fe3f16850643ddea7b8241", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "md5sum": "55e005a0ea1189fe2fdaec2aa067c9ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 567, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483612.56-77945485725838/source", "state": "file", "uid": 0} >2018-10-02 08:33:33,525 p=1004 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": true, "checksum": "d1df68a467581e77f333f3b298e7468d481cd4f9", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "md5sum": "28f461b63d8f387e621270b48c173c14", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 570, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483613.04-158350659824810/source", "state": "file", "uid": 0} >2018-10-02 08:33:34,020 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": true, "checksum": "f443ddd7e1a092183f1b1bbfeb907cfa02350b8e", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "fed3aeb2bc74d1bddee73605c9721620", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 286, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483613.53-4994279659173/source", "state": "file", "uid": 0} >2018-10-02 08:33:34,506 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": true, "checksum": "f738705362f769b5c58dbc9c992f47e85f1ab843", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", 
"gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6b53a6cd98296db748d8be17516c9ee9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483614.03-223944307048429/source", "state": "file", "uid": 0} >2018-10-02 08:33:34,972 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": true, "checksum": "ca1380e0b1137ad3d00ea1072626895f4fe49d47", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "ea95efcb272cc6b7461042449930b907", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 289, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483614.51-261290034610778/source", "state": "file", "uid": 0} >2018-10-02 08:33:35,463 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": true, "checksum": "bab06bf4ffa6e74dc1350557f8a6ee04932bd706", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": 
"7759d7f076a692679c06e6ed62af4515", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483614.98-53411299992506/source", "state": "file", "uid": 0} >2018-10-02 08:33:35,937 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": true, "checksum": "0eb4f95e78f6179fffb63db4d145e159589d34bb", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "5ab871921503ca9c5ae5199392c032da", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 290, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483615.47-101032396034681/source", "state": "file", "uid": 0} >2018-10-02 08:33:36,411 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": true, "checksum": "0f5cdcae0a9852bb0409d285c255b32f4e5b5aad", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "8ea9581644b6b529790c49c4affb5248", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 293, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483615.95-274280386095577/source", "state": "file", "uid": 0} >2018-10-02 08:33:36,876 p=1004 u=mistral | changed: 
[controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": true, "checksum": "91edd76df109b9e85b08c44212af66ce68b703cc", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "8cc885791431eef6c91a6c1795ebae5d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 289, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483616.42-25123062257536/source", "state": "file", "uid": 0} >2018-10-02 08:33:37,331 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": true, "checksum": "5806fb41d64e1ec9927f04cee62782d0ad2220ad", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a11530a8453ca2aced8b757baded7afa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 290, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483616.88-103930846481079/source", "state": "file", "uid": 0} >2018-10-02 08:33:37,787 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": true, "checksum": "917b1916fd92fb4118912953f92968148232f0b4", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", 
{"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a46509d71c3a164b7337486fc72c21eb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 284, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483617.34-252797262548463/source", "state": "file", "uid": 0} >2018-10-02 08:33:38,275 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": true, "checksum": "bcd142a3190958913657993b2d0370b8b50d8de6", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "b3e309e5012e5a0de7897d4910b743e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483617.8-32025929492552/source", "state": "file", "uid": 0} >2018-10-02 08:33:38,744 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": true, "checksum": "71861a048120b6189ee51944215bf6f35060f641", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a1a58d8bb7898e3d50d84cb6d0b6c295", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
287, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483618.28-263065069180457/source", "state": "file", "uid": 0} >2018-10-02 08:33:39,208 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": true, "checksum": "0fc6810e1c10d6510da93c21a7ef2f3f5da07470", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "md5sum": "998a973b45a0b4d58ffc3846445ae2f4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 438, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483618.75-168296961196539/source", "state": "file", "uid": 0} >2018-10-02 08:33:39,682 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": true, "checksum": "05003e9fbb4c2e4b1582d568a56a819c4c861747", "dest": "/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6ba83369ac04258591882d0ca18861b7", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 284, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483619.22-262813142752961/source", "state": "file", "uid": 0} >2018-10-02 08:33:40,143 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": 
true, "checksum": "4cabf21d4f9d5c422dd56beda1075370c5c0578d", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "94550794bfe1ed7707c5aa631b14664f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 281, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483619.69-85000118361078/source", "state": "file", "uid": 0} >2018-10-02 08:33:40,584 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483620.15-251701110464552/source", "state": "file", "uid": 0} >2018-10-02 08:33:41,047 p=1004 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": true, "checksum": "6ac960e4f5a1bb13c557a47292a7d63517d1b75d", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], 
"md5sum": "f80d86d94e23c4a21e131a520023d48e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 286, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483620.59-39765095086841/source", "state": "file", "uid": 0} >2018-10-02 08:33:41,108 p=1004 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 08:33:41,108 p=1004 u=mistral | Tuesday 02 October 2018 08:33:41 -0400 (0:00:30.858) 0:04:53.841 ******* >2018-10-02 08:33:41,123 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:33:41,147 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:33:41,174 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:33:41,203 p=1004 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-10-02 08:33:41,204 p=1004 u=mistral | Tuesday 02 October 2018 08:33:41 -0400 (0:00:00.095) 0:04:53.937 ******* >2018-10-02 08:33:41,739 p=1004 u=mistral | changed: [controller-0] => (item=step_3) => {"changed": true, "checksum": "f95a667e13f830f3654131f0f75b234e7583eada", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "md5sum": "3cb02ed98d510494fae3b905d481887e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 444, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483621.27-139090365380699/source", "state": "file", "uid": 0} >2018-10-02 08:33:42,259 p=1004 u=mistral | changed: [controller-0] => (item=step_4) => {"changed": true, "checksum": "54032a2f094e88383168daf9a4c4272527eb58c2", "dest": "/var/lib/docker-puppet/docker-puppet-tasks4.json", "gid": 0, "group": "root", "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "md5sum": "39336ca7617002b5943f604caee3cea5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483621.75-112327699275324/source", "state": "file", "uid": 0} >2018-10-02 08:33:42,289 p=1004 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 08:33:42,289 p=1004 u=mistral | Tuesday 02 October 2018 08:33:42 -0400 (0:00:01.085) 0:04:55.022 ******* >2018-10-02 08:33:42,322 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,351 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,368 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,394 p=1004 u=mistral | TASK [Check for /etc/puppet/check-mode directory for check mode] *************** >2018-10-02 08:33:42,394 p=1004 u=mistral | Tuesday 02 October 2018 08:33:42 -0400 (0:00:00.105) 0:04:55.127 ******* >2018-10-02 08:33:42,425 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:33:42,454 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,464 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,490 p=1004 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 08:33:42,490 p=1004 u=mistral | Tuesday 02 October 2018 08:33:42 -0400 (0:00:00.096) 0:04:55.223 ******* >2018-10-02 08:33:42,520 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,545 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,552 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:42,573 p=1004 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 08:33:42,573 p=1004 u=mistral | Tuesday 02 October 2018 08:33:42 -0400 (0:00:00.083) 0:04:55.307 ******* >2018-10-02 08:33:43,120 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483622.61-183317516213301/source", "state": "file", "uid": 0} >2018-10-02 08:33:43,178 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, 
"src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483622.64-170319198585315/source", "state": "file", "uid": 0} >2018-10-02 08:33:43,221 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538483622.67-144012723972526/source", "state": "file", "uid": 0} >2018-10-02 08:33:43,251 p=1004 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 08:33:43,252 p=1004 u=mistral | Tuesday 02 October 2018 08:33:43 -0400 (0:00:00.678) 0:04:55.985 ******* >2018-10-02 08:33:43,284 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:43,311 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:43,323 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:33:43,352 p=1004 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-10-02 08:33:43,352 p=1004 u=mistral | Tuesday 02 October 2018 08:33:43 -0400 (0:00:00.100) 0:04:56.085 ******* >2018-10-02 08:33:59,599 p=1004 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:34:02,113 p=1004 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:35:08,919 p=1004 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact 
that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:35:08,948 p=1004 u=mistral | TASK [Debug output for task: Run puppet host configuration for step 1] ********* >2018-10-02 08:35:08,948 p=1004 u=mistral | Tuesday 02 October 2018 08:35:08 -0400 (0:01:25.596) 0:06:21.682 ******* >2018-10-02 08:35:09,099 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.93 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: 
content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster 
tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}8578e0beb38a194414fa1615ea345b62' to '{md5}85274b5d58af3572868d4ef10722b50f'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql 
galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 74.57 seconds", > "Changes:", > " Total: 169", > "Events:", > " Success: 169", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 215", > " Restarted: 5", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " 
File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.05", > " Sysctl: 0.14", > " Sysctl runtime: 0.21", > " File: 0.28", > " Package: 0.42", > " Pcmk property: 1.14", > " Firewall: 14.62", > " Last run: 1538483708", > " Service: 2.54", > " Config retrieval: 3.41", > " Exec: 52.13", > " Total: 74.97", > " Filebucket: 0.00", > "Version:", > " Config: 1538483630", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:35:09,118 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.94 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > 
"Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 7.16 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 134", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.06", > " File: 0.16", > " Sysctl runtime: 0.24", > " Package: 0.26", > " Service: 1.41", > " Firewall: 1.59", > " Last run: 1538483639", > " Exec: 2.06", > " Config retrieval: 2.24", > " Filebucket: 0.00", > " Total: 8.04", > " Concat fragment: 0.00", > "Version:", > " Config: 1538483629", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:35:09,812 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.79 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 
nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 9.47 seconds", > "Changes:", > " Total: 99", > "Events:", > " Success: 99", > "Resources:", > " Total: 140", > " Restarted: 3", > " Out of sync: 99", > " Changed: 99", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " File: 0.10", > " Sysctl: 0.12", > " Sysctl runtime: 0.17", > " Package: 0.25", > " Service: 1.15", > " Total: 10.26", > " Last run: 1538483641", > " Config retrieval: 2.08", > " Firewall: 2.35", > " Exec: 4.01", > " Concat fragment: 0.00", > 
"Version:", > " Config: 1538483630", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:35:09,846 p=1004 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-10-02 08:35:09,846 p=1004 u=mistral | Tuesday 02 October 2018 08:35:09 -0400 (0:00:00.897) 0:06:22.579 ******* >2018-10-02 08:35:33,140 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:36:08,152 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:08,987 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:09,011 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 1] *** >2018-10-02 08:38:09,011 p=1004 u=mistral | Tuesday 02 October 2018 08:38:09 -0400 (0:02:59.164) 0:09:21.744 ******* >2018-10-02 08:38:09,113 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 12:35:10,177 INFO: 16626 -- Running docker-puppet", > "2018-10-02 12:35:10,177 DEBUG: 16626 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 12:35:10,177 DEBUG: 16626 -- config_volume crond", > "2018-10-02 12:35:10,178 DEBUG: 16626 -- puppet_tags ", > "2018-10-02 12:35:10,178 DEBUG: 16626 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:35:10,178 DEBUG: 16626 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,178 DEBUG: 16626 -- volumes []", > "2018-10-02 12:35:10,178 DEBUG: 16626 -- 
Adding new service", > "2018-10-02 12:35:10,178 INFO: 16626 -- Service compilation completed.", > "2018-10-02 12:35:10,179 DEBUG: 16626 -- CHECK_MODE: 0", > "2018-10-02 12:35:10,179 DEBUG: 16626 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,179 INFO: 16626 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-10-02 12:35:10,194 INFO: 16627 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- config_volume crond", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- volumes []", > "2018-10-02 12:35:10,195 DEBUG: 16627 -- check_mode 0", > "2018-10-02 12:35:10,196 INFO: 16627 -- Removing container: docker-puppet-crond", > "2018-10-02 12:35:10,294 INFO: 16627 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:24,880 DEBUG: 16627 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Waiting", > "e17262bc2341: Download complete", > "4d80de3c75a6: Verifying Checksum", > "4d80de3c75a6: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "0f4899fadd7f: Verifying Checksum", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "", > "2018-10-02 12:35:24,886 DEBUG: 16627 -- NET_HOST enabled", > "2018-10-02 12:35:24,886 DEBUG: 16627 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpojNOEs:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 
12:35:32,947 DEBUG: 16627 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.44 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.54", > " Total: 0.55", > " Last run: 1538483732", > "Version:", > " Config: 1538483731", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 12:35:25.175261004 +0000", > "2018-10-02 12:35:32,948 DEBUG: 16627 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: 
Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:25.175261004 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ tar xO", > "tar: Removing leading `/' from member names", > "+ md5sum", > "+ awk '{print $1}'", > "+ sed '/^#.*HEADER.*/d'", > "+ tar -c --mtime=1970-01-01 
'--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 12:35:32,948 INFO: 16627 -- Removing container: docker-puppet-crond", > "2018-10-02 12:35:32,987 DEBUG: 16627 -- docker-puppet-crond", > "2018-10-02 12:35:32,988 INFO: 16627 -- Finished processing puppet configs for crond", > "2018-10-02 12:35:32,989 DEBUG: 16626 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-10-02 12:35:32,989 DEBUG: 16626 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-10-02 12:35:32,993 DEBUG: 16626 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:35:32,993 DEBUG: 16626 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:35:32,993 DEBUG: 16626 -- Updating config hash for logrotate_crond, config_volume=crond hash=6f2a5e23a896d70ebbc2c66d87cd9266" > ] >} >2018-10-02 08:38:09,220 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 12:35:10,200 INFO: 18771 -- Running docker-puppet", > "2018-10-02 12:35:10,200 DEBUG: 18771 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- config_volume ceilometer", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- puppet_tags ceilometer_config", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- volumes []", > "2018-10-02 12:35:10,201 DEBUG: 
18771 -- Adding new service", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- config_volume neutron", > "2018-10-02 12:35:10,201 DEBUG: 18771 -- puppet_tags neutron_plugin_ml2", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- volumes []", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- Adding new service", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- config_volume neutron", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- config_volume iscsid", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- puppet_tags iscsid_config", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- config_volume nova_libvirt", > "2018-10-02 12:35:10,202 DEBUG: 18771 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > 
"2018-10-02 12:35:10,203 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- volumes []", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- Adding new service", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- config_volume nova_libvirt", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- puppet_tags ", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-10-02 12:35:10,203 DEBUG: 18771 -- config_volume crond", > "2018-10-02 12:35:10,204 DEBUG: 18771 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:35:10,204 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,204 DEBUG: 18771 -- volumes []", > "2018-10-02 12:35:10,204 DEBUG: 18771 -- Adding new service", > "2018-10-02 12:35:10,204 INFO: 18771 -- Service compilation completed.", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- CHECK_MODE: 0", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default 
nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 0]", > "2018-10-02 12:35:10,205 DEBUG: 18771 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1', [u'/etc/iscsi:/etc/iscsi'], 0]", > "2018-10-02 12:35:10,205 INFO: 18771 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-10-02 12:35:10,218 INFO: 18772 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:10,218 DEBUG: 18772 -- config_volume ceilometer", > "2018-10-02 12:35:10,218 INFO: 18773 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:10,219 DEBUG: 18772 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- config_volume nova_libvirt", > "2018-10-02 12:35:10,219 DEBUG: 18772 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-10-02 12:35:10,219 DEBUG: 18772 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-10-02 12:35:10,219 DEBUG: 18772 -- volumes []", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-10-02 12:35:10,219 DEBUG: 18772 -- check_mode 0", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- volumes []", > "2018-10-02 12:35:10,219 DEBUG: 18773 -- check_mode 0", > "2018-10-02 12:35:10,220 INFO: 18772 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 12:35:10,220 INFO: 18773 -- Removing container: docker-puppet-nova_libvirt", > "2018-10-02 12:35:10,221 INFO: 18774 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,221 DEBUG: 18774 -- config_volume crond", > "2018-10-02 12:35:10,221 DEBUG: 18774 -- 
puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 12:35:10,221 DEBUG: 18774 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:35:10,221 DEBUG: 18774 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,221 DEBUG: 18774 -- volumes []", > "2018-10-02 12:35:10,222 DEBUG: 18774 -- check_mode 0", > "2018-10-02 12:35:10,222 INFO: 18774 -- Removing container: docker-puppet-crond", > "2018-10-02 12:35:10,319 INFO: 18774 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,335 INFO: 18772 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:10,335 INFO: 18773 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:24,379 DEBUG: 18774 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "4d80de3c75a6: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "0f4899fadd7f: Verifying Checksum", > "0f4899fadd7f: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:24,382 DEBUG: 18774 -- NET_HOST enabled", > "2018-10-02 12:35:24,383 DEBUG: 18774 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron 
--env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmppxjpKy:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:30,371 DEBUG: 18772 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "ff59208988ad: Pulling fs layer", > "5fcda0d83a5e: Pulling fs layer", > "2142eca15b92: Pulling fs layer", > "ff59208988ad: Waiting", > "5fcda0d83a5e: Waiting", > "2142eca15b92: Waiting", > "ff59208988ad: Verifying Checksum", > "ff59208988ad: Download complete", > "5fcda0d83a5e: Verifying Checksum", > "5fcda0d83a5e: Download complete", > "2142eca15b92: Verifying Checksum", > "2142eca15b92: Download complete", > "ff59208988ad: Pull complete", > "5fcda0d83a5e: Pull complete", > "2142eca15b92: Pull complete", > "Digest: sha256:ba6a24fd5b438c2530cbd903d1b4616e6075f146618be39391273ae43949bbad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:30,374 DEBUG: 18772 -- NET_HOST enabled", > "2018-10-02 12:35:30,374 DEBUG: 18772 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpzsctTj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:33,111 DEBUG: 18774 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.45 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.55", > " Total: 0.56", > " Last run: 1538483731", > "Version:", > " Config: 1538483731", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 12:35:24.731541835 +0000", > "2018-10-02 12:35:33,111 DEBUG: 18774 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:24.731541835 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ tar xO", > "tar: Removing leading `/' 
from member names", > "+ sed '/^#.*HEADER.*/d'", > "+ md5sum", > "+ awk '{print $1}'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 12:35:33,111 INFO: 18774 -- Removing container: docker-puppet-crond", > "2018-10-02 12:35:33,166 DEBUG: 18774 -- docker-puppet-crond", > "2018-10-02 12:35:33,166 INFO: 18774 -- Finished processing puppet configs for crond", > "2018-10-02 12:35:33,167 INFO: 18774 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- config_volume neutron", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 12:35:33,167 DEBUG: 18774 -- check_mode 0", > "2018-10-02 12:35:33,169 INFO: 18774 -- Removing container: docker-puppet-neutron", > "2018-10-02 12:35:33,275 INFO: 18774 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:40,827 DEBUG: 18772 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.26 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: 
created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.69 seconds", > " Total: 24", > " Success: 24", > " Total: 139", > " Skipped: 22", > " Out of sync: 24", > " Changed: 24", > " Ceilometer config: 0.59", > " Config retrieval: 1.49", > " Last run: 1538483739", > " Total: 2.08", > " Resources: 0.00", > " Config: 1538483737", > "Gathering files modified after 2018-10-02 12:35:30.768974584 +0000", > "2018-10-02 12:35:40,828 DEBUG: 18772 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with 
Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:30.768974584 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-10-02 12:35:40,828 INFO: 18772 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 12:35:40,882 DEBUG: 18772 -- docker-puppet-ceilometer", > "2018-10-02 12:35:40,882 INFO: 18772 -- Finished processing puppet configs for ceilometer", > "2018-10-02 12:35:40,883 INFO: 18772 -- Starting configuration of 
iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:40,883 DEBUG: 18772 -- config_volume iscsid", > "2018-10-02 12:35:40,883 DEBUG: 18772 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-10-02 12:35:40,883 DEBUG: 18772 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 12:35:40,883 DEBUG: 18772 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:40,884 DEBUG: 18772 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 12:35:40,884 DEBUG: 18772 -- check_mode 0", > "2018-10-02 12:35:40,885 INFO: 18772 -- Removing container: docker-puppet-iscsid", > "2018-10-02 12:35:40,992 INFO: 18772 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:41,430 DEBUG: 18774 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "f3c66d22e08b: Pulling fs layer", > "6cca3e1c80e1: Pulling fs layer", > "d405f46408bf: Pulling fs layer", > "d405f46408bf: Verifying Checksum", > "d405f46408bf: Download complete", > "6cca3e1c80e1: Verifying Checksum", > "6cca3e1c80e1: Download complete", > "f3c66d22e08b: Verifying Checksum", > "f3c66d22e08b: Download complete", > "f3c66d22e08b: Pull complete", > "6cca3e1c80e1: Pull complete", > "d405f46408bf: Pull complete", > "Digest: sha256:0c7ace86b7c08a5ec94dbf283b5a7a95f0678caf8c830185bcfc7a5dbaec5704", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:41,434 DEBUG: 18774 -- NET_HOST enabled", > "2018-10-02 12:35:41,434 DEBUG: 18774 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env 
PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpMNYuhb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:41,935 DEBUG: 18772 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "2afcd4790b43: Pulling fs layer", > "2afcd4790b43: Verifying Checksum", > "2afcd4790b43: Download complete", > "2afcd4790b43: Pull complete", > "Digest: sha256:b516e920a95255994d6493d4a922af867754e570e2afe8afeaa5c2f3e25a6d94", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:41,938 DEBUG: 18772 -- NET_HOST enabled", > "2018-10-02 12:35:41,938 DEBUG: 18772 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpSyUecF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:47,017 DEBUG: 18773 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "9e28a9d49d0f: Pulling fs layer", > "eff4ef11e8d6: Pulling fs layer", > "9e28a9d49d0f: Waiting", > "eff4ef11e8d6: Waiting", > "9e28a9d49d0f: Verifying Checksum", > "9e28a9d49d0f: Download complete", > "eff4ef11e8d6: Verifying Checksum", > "eff4ef11e8d6: Download complete", > "9e28a9d49d0f: Pull complete", > "eff4ef11e8d6: Pull complete", > "Digest: sha256:9cbbdf47aea4339ed69ccc5d376981d41ee8a96efdf03e25708c9cf540b0c4ac", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:47,020 DEBUG: 18773 -- NET_HOST enabled", > "2018-10-02 12:35:47,021 DEBUG: 18773 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpwO5qfl:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 12:35:49,857 DEBUG: 18772 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.57 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > " Total: 10", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.65", > " Total: 0.66", > " Last run: 1538483748", > " Config: 1538483748", > "Gathering files modified after 2018-10-02 12:35:42.210897415 +0000", > "2018-10-02 12:35:49,857 DEBUG: 18772 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:42.210897415 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - 
/var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-10-02 12:35:49,857 INFO: 18772 -- Removing container: docker-puppet-iscsid", > "2018-10-02 12:35:49,896 DEBUG: 18772 -- docker-puppet-iscsid", > "2018-10-02 12:35:49,896 INFO: 18772 -- Finished processing puppet configs for iscsid", > "2018-10-02 12:35:52,976 DEBUG: 18774 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.54 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog 
in 0.80 seconds", > " Total: 45", > " Success: 45", > " Total: 175", > " Skipped: 27", > " Out of sync: 45", > " Changed: 45", > " Neutron agent ovs: 0.02", > " Neutron plugin ml2: 0.08", > " Neutron config: 0.56", > " Last run: 1538483751", > " Config retrieval: 2.79", > " Total: 3.45", > "Gathering files modified after 2018-10-02 12:35:41.727942959 +0000", > "2018-10-02 12:35:52,977 DEBUG: 18774 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 492]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 53]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:41.727942959 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-10-02 12:35:52,977 INFO: 18774 -- Removing container: docker-puppet-neutron", > "2018-10-02 12:35:53,016 DEBUG: 18774 -- docker-puppet-neutron", > "2018-10-02 12:35:53,017 INFO: 18774 -- Finished processing puppet configs for neutron", > "2018-10-02 12:36:07,992 DEBUG: 18773 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in 
environment production in 2.88 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}b308b1b1aab82c160024dac0f6ad10ca'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}6fdbf752a1ce3b21f1303d4e498607a1'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > 
"Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}bd4018244d6d12704b4681795c9abf60'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: 
/Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}5d943a01ffd64865ad5d5710b467b752'", > "Notice: Applied catalog in 9.26 seconds", > " Total: 108", > " Success: 108", > " Changed: 108", > " Out of sync: 108", > " Total: 324", > " Skipped: 48", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.04", > " Package: 0.09", > " Augeas: 1.17", > " Total: 12.16", > " Last run: 1538483766", > " Config retrieval: 3.35", > " Nova config: 7.47", > " Config: 1538483753", > 
"Gathering files modified after 2018-10-02 12:35:47.231423816 +0000", > "2018-10-02 12:36:07,993 DEBUG: 18773 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch /var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. 
at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:47.231423816 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_libvirt", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-10-02 12:36:07,993 INFO: 18773 -- Removing container: docker-puppet-nova_libvirt", > "2018-10-02 12:36:08,036 DEBUG: 18773 -- docker-puppet-nova_libvirt", > "2018-10-02 12:36:08,037 INFO: 18773 -- Finished processing puppet configs for nova_libvirt", > "2018-10-02 12:36:08,037 DEBUG: 18771 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-10-02 12:36:08,038 DEBUG: 18771 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-10-02 12:36:08,040 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid hash=686c9b8fc68bcc73c58cf7b174a3e825", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=89c234429bc735a1020fb7875463ebad", > "2018-10-02 12:36:08,041 DEBUG: 18771 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=89c234429bc735a1020fb7875463ebad", > "2018-10-02 
12:36:08,042 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=788062ed6d1e9d8ff9b5fe8d066b2fd6", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=686c9b8fc68bcc73c58cf7b174a3e825", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 12:36:08,043 DEBUG: 18771 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=89c234429bc735a1020fb7875463ebad", > "2018-10-02 12:36:08,044 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 12:36:08,044 DEBUG: 18771 -- Updating config hash 
for nova_compute, config_volume=iscsid hash=89c234429bc735a1020fb7875463ebad", > "2018-10-02 12:36:08,044 DEBUG: 18771 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:36:08,044 DEBUG: 18771 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:36:08,044 DEBUG: 18771 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=6f2a5e23a896d70ebbc2c66d87cd9266" > ] >} >2018-10-02 08:38:10,204 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 12:35:10,155 INFO: 28744 -- Running docker-puppet", > "2018-10-02 12:35:10,156 DEBUG: 28744 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 12:35:10,156 DEBUG: 28744 -- config_volume aodh", > "2018-10-02 12:35:10,156 DEBUG: 28744 -- puppet_tags aodh_api_paste_ini,aodh_config", > "2018-10-02 12:35:10,156 DEBUG: 28744 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- config_volume aodh", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- puppet_tags aodh_config", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- manifest include tripleo::profile::base::aodh::listener", > "2018-10-02 12:35:10,157 DEBUG: 28744 -- manifest include 
tripleo::profile::base::aodh::notifier", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- config_volume ceilometer", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- puppet_tags ceilometer_config", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- config_volume cinder", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- puppet_tags cinder_config,cinder_type,file,concat,file_line", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-10-02 12:35:10,158 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- puppet_tags cinder_config,file,concat,file_line", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- config_volume cinder", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- 
config_volume clustercheck", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- puppet_tags file", > "2018-10-02 12:35:10,159 DEBUG: 28744 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- config_volume glance_api", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- manifest include ::tripleo::profile::base::glance::api", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- config_volume gnocchi", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- puppet_tags gnocchi_config", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-10-02 12:35:10,160 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_volume gnocchi", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- puppet_tags gnocchi_config", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- Existing service, 
appending puppet tags and manifest", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_volume haproxy", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- puppet_tags haproxy_config", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_volume heat_api", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- puppet_tags heat_config,file,concat,file_line", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- manifest include ::tripleo::profile::base::heat::api", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:35:10,161 DEBUG: 28744 -- config_volume heat_api_cfn", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- puppet_tags heat_config,file,concat,file_line", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 
12:35:10,162 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_volume heat", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_volume horizon", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- puppet_tags horizon_config", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- manifest include ::tripleo::profile::base::horizon", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_volume iscsid", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- puppet_tags iscsid_config", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 12:35:10,162 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_volume keystone", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- puppet_tags keystone_config,keystone_domain_config", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_volume memcached", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- puppet_tags file", > "2018-10-02 
12:35:10,163 DEBUG: 28744 -- manifest include ::tripleo::profile::base::memcached", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_volume mysql", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:10,163 DEBUG: 28744 -- config_volume neutron", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- puppet_tags neutron_config,neutron_api_config", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- manifest include tripleo::profile::base::neutron::server", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- config_volume neutron", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- puppet_tags neutron_plugin_ml2", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-10-02 12:35:10,164 DEBUG: 28744 -- manifest include tripleo::profile::base::neutron::l3", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- Existing service, 
appending puppet tags and manifest", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- config_volume neutron", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- config_volume nova", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- puppet_tags nova_config", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:35:10,165 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- manifest include tripleo::profile::base::nova::conductor", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- config_volume nova", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- puppet_tags nova_config", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- config_volume nova_placement", > 
"2018-10-02 12:35:10,166 DEBUG: 28744 -- manifest include tripleo::profile::base::nova::placement", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,166 DEBUG: 28744 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_volume nova", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- puppet_tags nova_config", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_volume crond", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- puppet_tags ", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_volume panko", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- manifest include tripleo::profile::base::panko::api", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- config_volume rabbitmq", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- puppet_tags file", > "2018-10-02 12:35:10,167 DEBUG: 28744 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-10-02 
12:35:10,167 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- config_volume redis", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- puppet_tags exec", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- config_volume sahara", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- config_volume swift", > "2018-10-02 12:35:10,168 DEBUG: 28744 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- volumes []", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- Adding new service", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- config_volume swift_ringbuilder", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- 
puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- config_volume swift", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-10-02 12:35:10,169 DEBUG: 28744 -- Existing service, appending puppet tags and manifest", > "2018-10-02 12:35:10,169 INFO: 28744 -- Service compilation completed.", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- CHECK_MODE: 0", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 
DEBUG: 28744 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude 
::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,170 DEBUG: 28744 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1', [u'/etc/iscsi:/etc/iscsi'], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include 
::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude 
::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', 
u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro'], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 DEBUG: 28744 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 
u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1', [], 0]", > "2018-10-02 12:35:10,171 INFO: 28744 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-10-02 12:35:10,183 INFO: 28745 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:10,183 DEBUG: 28745 -- config_volume nova_placement", > "2018-10-02 12:35:10,184 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-10-02 12:35:10,184 DEBUG: 28745 -- manifest include tripleo::profile::base::nova::placement", > "2018-10-02 12:35:10,183 INFO: 28746 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:10,184 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:10,184 DEBUG: 28745 -- volumes []", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- config_volume swift_ringbuilder", > "2018-10-02 12:35:10,184 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- volumes []", > "2018-10-02 12:35:10,184 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:35:10,184 INFO: 28747 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 
12:35:10,184 DEBUG: 28747 -- config_volume gnocchi", > "2018-10-02 12:35:10,184 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-10-02 12:35:10,184 DEBUG: 28747 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-10-02 12:35:10,185 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:10,185 DEBUG: 28747 -- volumes []", > "2018-10-02 12:35:10,185 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:35:10,185 INFO: 28745 -- Removing container: docker-puppet-nova_placement", > "2018-10-02 12:35:10,185 INFO: 28746 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-10-02 12:35:10,186 INFO: 28747 -- Removing container: docker-puppet-gnocchi", > "2018-10-02 12:35:10,271 INFO: 28745 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:10,271 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:10,272 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:29,656 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "ff59208988ad: Pulling fs layer", > "119515329f22: Pulling fs layer", > "9f313d6fc73a: Pulling fs layer", > "ff59208988ad: Waiting", > "119515329f22: Waiting", > "9f313d6fc73a: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "ff59208988ad: Verifying Checksum", > "ff59208988ad: Download complete", > "119515329f22: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "9f313d6fc73a: Verifying Checksum", > "0f4899fadd7f: Verifying Checksum", > "0f4899fadd7f: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "ff59208988ad: Pull complete", > "119515329f22: Pull complete", > "9f313d6fc73a: Pull complete", > "Digest: sha256:89819121606959e49721d100f1917a0698f37b8740a2f740eb6f20af29b481a8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:29,660 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:35:29,660 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDFxKAo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z 
--volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:35:32,071 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "d0a704666261: Pulling fs layer", > "4df40fae1310: Pulling fs layer", > "d0a704666261: Waiting", > "4df40fae1310: Waiting", > "4df40fae1310: Verifying Checksum", > "4df40fae1310: Download complete", > "d0a704666261: Verifying Checksum", > "d0a704666261: Download complete", > "d0a704666261: Pull complete", > "4df40fae1310: Pull complete", > "Digest: sha256:a9c992ecf6a590d2d549ef59ef724604638a1918b26690ca0205ca6caf15c60b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:32,075 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:35:32,075 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp3XjXcz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 12:35:34,523 DEBUG: 28745 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "9e28a9d49d0f: Pulling fs layer", > "99145198ab24: Pulling fs layer", > "9e28a9d49d0f: Waiting", > "99145198ab24: Waiting", > "99145198ab24: Verifying Checksum", > "99145198ab24: Download complete", > "9e28a9d49d0f: Verifying Checksum", > "9e28a9d49d0f: Download complete", > "9e28a9d49d0f: Pull complete", > "99145198ab24: Pull complete", > "Digest: sha256:c8ad6dd93c095f7dc983f168d49fb64b51a827836b1522e9c06a5335ebdc70a4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:34,526 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:35:34,527 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpdvfQqm:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 12:35:44,872 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.21 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.31:%PORT%/d1]/Ring_object_device[172.17.4.31:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.31:%PORT%/d1]/Ring_container_device[172.17.4.31:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.31:%PORT%/d1]/Ring_account_device[172.17.4.31:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.75 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.01", > " Ring container device: 0.56", > " Ring account device: 0.57", > " Ring object device: 0.61", > " Config 
retrieval: 1.34", > " Exec: 1.50", > " Last run: 1538483743", > " Total: 4.60", > "Version:", > " Config: 1538483737", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 12:35:29.991809801 +0000", > "2018-10-02 12:35:44,873 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > 
"Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ 
rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:29.991809801 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar xO", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift_ringbuilder", > "+ sed '/^#.*HEADER.*/d'", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-10-02 12:35:44,873 INFO: 28746 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-10-02 12:35:44,921 DEBUG: 28746 -- docker-puppet-swift_ringbuilder", > "2018-10-02 12:35:44,921 INFO: 28746 -- Finished processing puppet configs for swift_ringbuilder", > "2018-10-02 12:35:44,922 INFO: 28746 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- config_volume 
sahara", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- volumes []", > "2018-10-02 12:35:44,922 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:35:44,923 INFO: 28746 -- Removing container: docker-puppet-sahara", > "2018-10-02 12:35:44,992 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:46,685 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.38 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined 
content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_methods]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6b8342ab4f5f558068c1a71a0dd1e894'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}3bd0015a5b258bebc53d757643b45830'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: 
/Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as 
'{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as 
'{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: 
/Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}1001349fa771bd31f137b23418ebcced'", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}e5c0c9cd823f0acd2f6be0c1455f6e4f'", > "Notice: Applied catalog in 1.15 seconds", > " Total: 114", > " Success: 114", > " Changed: 114", > " Out of sync: 114", > " Total: 261", > " Skipped: 43", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.29", > " File: 0.30", > " Last run: 1538483745", > " Config retrieval: 4.90", > " Total: 5.51", > " Resources: 0.00", > " Config: 1538483738", > "Gathering files modified after 2018-10-02 12:35:32.331822912 +0000", > "2018-10-02 12:35:46,685 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:32.331822912 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/gnocchi", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-10-02 12:35:46,685 INFO: 28747 -- Removing container: docker-puppet-gnocchi", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- docker-puppet-gnocchi", > "2018-10-02 12:35:46,750 INFO: 28747 -- Finished processing puppet configs for gnocchi", > "2018-10-02 12:35:46,750 INFO: 28747 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- config_volume clustercheck", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- volumes []", > "2018-10-02 12:35:46,750 DEBUG: 28747 -- check_mode 
0", > "2018-10-02 12:35:46,752 INFO: 28747 -- Removing container: docker-puppet-clustercheck", > "2018-10-02 12:35:46,817 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:47,526 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "8699899a971e: Pulling fs layer", > "45d7e459b0ba: Pulling fs layer", > "45d7e459b0ba: Verifying Checksum", > "45d7e459b0ba: Download complete", > "8699899a971e: Download complete", > "8699899a971e: Pull complete", > "45d7e459b0ba: Pull complete", > "Digest: sha256:fde08aa97680215d52c978016470d6ab81eb3896ac0f9a038a7be67515f7ef00", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:47,530 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:35:47,530 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp4G3gbV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 12:35:53,693 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "86174678f419: Pulling fs layer", > "86174678f419: Verifying Checksum", > "86174678f419: Download complete", > "86174678f419: Pull complete", > "Digest: sha256:a18df92dad8491aa406a8a5075c976a71c5dff0af8c8ff75f0cb22355cc77f87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:53,697 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:35:53,697 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp6DAE7h:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:35:56,888 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.97 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > 
"Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6f0ae67d7485498c85e340b253429e98'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}d7263437cadb7bca0a50850c091b7547'", > "Notice: Applied catalog in 7.95 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 375", > " Skipped: 39", > " Package: 0.11", > " File: 0.50", > " Total: 12.80", > " Last run: 1538483754", > " Config retrieval: 5.59", > " Nova config: 6.58", > " Config: 1538483741", > "Gathering files modified after 2018-10-02 12:35:34.725836238 +0000", > "2018-10-02 12:35:56,888 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: 
database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:34.725836238 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_placement", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-10-02 12:35:56,888 INFO: 28745 -- Removing container: docker-puppet-nova_placement", > "2018-10-02 12:35:56,946 DEBUG: 28745 -- docker-puppet-nova_placement", > "2018-10-02 12:35:56,947 INFO: 28745 -- Finished processing puppet configs for nova_placement", > "2018-10-02 12:35:56,947 INFO: 28745 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:56,947 DEBUG: 28745 -- config_volume aodh", > 
"2018-10-02 12:35:56,947 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-10-02 12:35:56,947 DEBUG: 28745 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-10-02 12:35:56,947 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:56,947 DEBUG: 28745 -- volumes []", > "2018-10-02 12:35:56,947 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:35:56,949 INFO: 28745 -- Removing container: docker-puppet-aodh", > "2018-10-02 12:35:57,014 INFO: 28745 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:59,099 DEBUG: 28745 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "70c8ade901ba: Pulling fs layer", > "e8ae5e32f329: Pulling fs layer", > "e8ae5e32f329: Verifying Checksum", > "e8ae5e32f329: Download complete", > "70c8ade901ba: Verifying Checksum", > "70c8ade901ba: Download complete", > "70c8ade901ba: Pull complete", > "e8ae5e32f329: Pull complete", > "Digest: sha256:7cb294078a56b5adb50320b21f0f4d9dad0d2dc096d2f2b346ee686861589a46", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:59,102 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:35:59,102 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpar1qN2:/etc/config.pp:ro,z --volume 
/etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 12:35:59,975 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.24 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 1.47 seconds", > " Total: 25", > " Success: 25", > " Total: 197", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " File: 0.00", > " Package: 0.05", > " Sahara config: 1.13", > " Last run: 1538483758", > " Config retrieval: 2.53", > " Total: 3.73", > " Config: 1538483754", > "Gathering files modified after 2018-10-02 12:35:47.764906604 +0000", > "2018-10-02 12:35:59,975 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:47.764906604 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/sahara", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-10-02 12:35:59,975 INFO: 28746 -- Removing 
container: docker-puppet-sahara", > "2018-10-02 12:36:00,018 DEBUG: 28746 -- docker-puppet-sahara", > "2018-10-02 12:36:00,018 INFO: 28746 -- Finished processing puppet configs for sahara", > "2018-10-02 12:36:00,018 INFO: 28746 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:36:00,018 DEBUG: 28746 -- config_volume mysql", > "2018-10-02 12:36:00,018 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 12:36:00,019 DEBUG: 28746 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-10-02 12:36:00,019 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:36:00,019 DEBUG: 28746 -- volumes []", > "2018-10-02 12:36:00,019 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:36:00,020 INFO: 28746 -- Removing container: docker-puppet-mysql", > "2018-10-02 12:36:00,075 INFO: 28746 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:36:00,078 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:36:00,078 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp6u2G0P:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 12:36:01,954 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.49 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}1bc5e3299c4a59a964cc16e21cad1919'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}7a618346b5acb5a43fd5dc4fa4897cc7'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.60", > " Total: 0.63", > " Last run: 1538483761", > " Config: 1538483760", > "Gathering files modified after 2018-10-02 12:35:53.918938654 +0000", > "2018-10-02 12:36:01,954 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false 
--logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:53.918938654 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/clustercheck", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-10-02 12:36:01,954 INFO: 28747 -- Removing container: docker-puppet-clustercheck", > "2018-10-02 12:36:02,000 DEBUG: 28747 -- docker-puppet-clustercheck", > "2018-10-02 12:36:02,000 INFO: 28747 -- Finished processing puppet configs for clustercheck", > "2018-10-02 12:36:02,000 INFO: 28747 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:36:02,000 DEBUG: 28747 -- config_volume redis", > "2018-10-02 12:36:02,000 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-10-02 12:36:02,000 DEBUG: 28747 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-10-02 12:36:02,000 DEBUG: 28747 -- config_image 
192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:36:02,001 DEBUG: 28747 -- volumes []", > "2018-10-02 12:36:02,001 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:36:02,002 INFO: 28747 -- Removing container: docker-puppet-redis", > "2018-10-02 12:36:02,066 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:36:05,672 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "b76c66c936ee: Pulling fs layer", > "edac33389285: Pulling fs layer", > "b76c66c936ee: Verifying Checksum", > "b76c66c936ee: Download complete", > "b76c66c936ee: Pull complete", > "edac33389285: Verifying Checksum", > "edac33389285: Download complete", > "edac33389285: Pull complete", > "Digest: sha256:8e75aa16fb47a7f685c996ceb37a84a6316a68a11a07f1c66b48117600612b2e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:36:05,675 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:36:05,676 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpB_Hf1T:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 12:36:13,170 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.60 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}a730a65a0efef3097d49f2084ff2db3e'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}76a4e05ad880b930b43fc47f1d505711'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}be8dfdd5a4076d5f39de0ce6aecd87bf'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.36 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " File: 0.03", > " Last run: 1538483772", > " Config retrieval: 4.98", > " Total: 5.00", > " Config: 1538483766", > "Gathering files modified after 2018-10-02 12:36:00.299971089 +0000", > "2018-10-02 12:36:13,170 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:00.299971089 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/mysql", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-10-02 12:36:13,170 INFO: 28746 -- Removing container: docker-puppet-mysql", > "2018-10-02 12:36:13,211 DEBUG: 28746 -- docker-puppet-mysql", > "2018-10-02 12:36:13,212 INFO: 28746 -- Finished 
processing puppet configs for mysql", > "2018-10-02 12:36:13,212 INFO: 28746 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:36:13,212 DEBUG: 28746 -- config_volume nova", > "2018-10-02 12:36:13,212 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-10-02 12:36:13,212 DEBUG: 28746 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-10-02 12:36:13,212 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:36:13,213 DEBUG: 28746 -- volumes []", > "2018-10-02 12:36:13,213 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:36:13,214 INFO: 28746 -- Removing container: docker-puppet-nova", > "2018-10-02 12:36:13,277 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:36:14,564 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.18 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: 
/Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}be99a9a28fde3a84874841df38523dcd'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.07 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " Augeas: 0.01", > " Config retrieval: 1.32", > " Total: 1.34", > " Last run: 1538483773", > " Config: 1538483772", > "Gathering files modified after 2018-10-02 12:36:05.919998999 +0000", > "2018-10-02 12:36:14,564 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:05.919998999 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/redis", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-10-02 12:36:14,565 INFO: 28747 -- Removing container: docker-puppet-redis", > "2018-10-02 12:36:14,605 DEBUG: 28747 -- docker-puppet-redis", > "2018-10-02 12:36:14,605 INFO: 28747 -- Finished processing puppet configs for redis", > "2018-10-02 12:36:14,606 INFO: 28747 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- config_volume keystone", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- config_image 
192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- volumes []", > "2018-10-02 12:36:14,606 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:36:14,607 INFO: 28747 -- Removing container: docker-puppet-keystone", > "2018-10-02 12:36:14,654 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.30 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: 
/Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}a15e1cb850d23ef94049e1c9cb47ddab'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}d4f660b7af364a2f5d58f7ed1666e1a1'", > "Notice: Applied catalog in 1.93 seconds", > " Total: 110", > " Success: 110", > " Changed: 109", > " Out of sync: 109", > " Total: 329", > " Skipped: 40", > " File: 0.38", > " Aodh config: 0.81", > " Config retrieval: 4.83", > " Total: 6.10", > "Gathering files modified after 2018-10-02 12:35:59.361966393 +0000", > "2018-10-02 12:36:14,654 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > "Warning: Unknown variable: 'undef'. at /etc/puppet/modules/aodh/manifests/init.pp:290:41", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Scope(Class[Aodh::Api]): host has no effect as of Newton and will be removed in a future \\", > "release. aodh::wsgi::apache supports setting a host via bind_host.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 132]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:35:59.361966393 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/aodh", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-10-02 12:36:14,654 INFO: 28745 -- Removing container: 
docker-puppet-aodh", > "2018-10-02 12:36:14,696 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:36:14,700 DEBUG: 28745 -- docker-puppet-aodh", > "2018-10-02 12:36:14,700 INFO: 28745 -- Finished processing puppet configs for aodh", > "2018-10-02 12:36:14,701 INFO: 28745 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- config_volume heat_api", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- manifest include ::tripleo::profile::base::heat::api", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- volumes []", > "2018-10-02 12:36:14,701 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:36:14,702 INFO: 28745 -- Removing container: docker-puppet-heat_api", > "2018-10-02 12:36:14,780 INFO: 28745 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:16,624 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "9e28a9d49d0f: Already exists", > "73c834b98c25: Pulling fs layer", > "73c834b98c25: Verifying Checksum", > "73c834b98c25: Download complete", > "73c834b98c25: Pull complete", > "Digest: sha256:0e5b7e3cf3455a72f25bf23e2d3e15f27add32743545241aa8a5bfd77559bf24", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:36:16,628 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:36:16,628 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVz13Wp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 12:36:17,395 DEBUG: 28745 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "d1bf34aac9d8: Pulling fs layer", > "1075fd166a56: Pulling fs layer", > "1075fd166a56: Verifying Checksum", > "1075fd166a56: Download complete", > "d1bf34aac9d8: Verifying Checksum", > "d1bf34aac9d8: Download complete", > "d1bf34aac9d8: Pull complete", > "1075fd166a56: Pull complete", > "Digest: sha256:e59baeac763341b8b2bab7f2bfbc4548e3ae4f38bc44046eb338d52d8eabf102", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:17,396 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "3bcc3bbd3f17: Pulling fs layer", > "016b47c04c8c: Pulling fs layer", > "016b47c04c8c: Verifying Checksum", > "016b47c04c8c: Download complete", > "3bcc3bbd3f17: Verifying Checksum", > "3bcc3bbd3f17: Download complete", > "3bcc3bbd3f17: Pull complete", > "016b47c04c8c: Pull complete", > "Digest: sha256:b8a47f5ce80ead2c8816fa3b237a5130565a3aea7bf0be3269d3c9d7867aff62", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:36:17,399 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:36:17,399 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVyMQY2:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:17,400 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:36:17,400 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmptKQuuH:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:36:32,696 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.24 seconds", > "Notice: 
/Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: 
/Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}7c29a76a1604a9283a510402a3598810'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}50fd8028f717ded87dfb7c398e44dba6'", > "Notice: Applied catalog in 2.51 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 336", > " Cron: 0.01", > " Package: 0.14", > " File: 0.31", > " Heat config: 1.43", > " Last run: 1538483791", > " Config retrieval: 4.73", > " Total: 6.63", > " Config: 1538483783", > "Gathering files modified after 
2018-10-02 12:36:17.597055094 +0000", > "2018-10-02 12:36:32,697 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 128]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:17.597055094 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-10-02 12:36:32,697 INFO: 28745 -- Removing container: docker-puppet-heat_api", > "2018-10-02 12:36:32,752 DEBUG: 28745 -- docker-puppet-heat_api", > "2018-10-02 12:36:32,753 INFO: 28745 -- Finished processing puppet configs for heat_api", > "2018-10-02 12:36:32,753 INFO: 28745 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:32,753 DEBUG: 28745 -- config_volume heat", > "2018-10-02 12:36:32,753 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 12:36:32,753 DEBUG: 28745 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-10-02 12:36:32,753 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:32,753 
DEBUG: 28745 -- volumes []", > "2018-10-02 12:36:32,753 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:36:32,755 INFO: 28745 -- Removing container: docker-puppet-heat", > "2018-10-02 12:36:32,808 INFO: 28745 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:32,811 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:36:32,811 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpAzsq_n:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 12:36:32,928 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.45 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}174f565d793cba22a7a62ea64aeecbaa'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}21684b83b8a5b2d63c2032d127edc99b'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}fcbccf7248c45248286edb723591fd28'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}bca1121fd3cf5606e490ededb392ad06'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", 
> "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}4472e3fe1357bfe976828272dbcc21d8'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}141cf061a2c2d3ce5644a71a85aab5ca'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}73d760609906b238e2df64f79a04cb29'", > "Notice: Applied catalog in 2.55 seconds", > " Total: 126", > " Success: 126", > " Changed: 126", > " Out of sync: 126", > " Total: 324", > " Skipped: 34", > " File: 0.25", > " Keystone config: 1.58", > " Config retrieval: 5.02", > " Total: 6.94", > "Gathering 
files modified after 2018-10-02 12:36:17.615055178 +0000", > "2018-10-02 12:36:32,929 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:17.615055178 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/keystone", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-10-02 12:36:32,929 INFO: 28747 -- Removing container: docker-puppet-keystone", > "2018-10-02 12:36:32,984 DEBUG: 28747 -- docker-puppet-keystone", > "2018-10-02 12:36:32,985 INFO: 28747 -- Finished processing puppet configs for keystone", > "2018-10-02 12:36:32,985 INFO: 28747 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:36:32,985 DEBUG: 28747 -- config_volume memcached", > "2018-10-02 12:36:32,985 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 12:36:32,985 DEBUG: 28747 -- manifest include ::tripleo::profile::base::memcached", > "2018-10-02 12:36:32,985 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:36:32,985 DEBUG: 28747 -- volumes []", > "2018-10-02 12:36:32,985 DEBUG: 
28747 -- check_mode 0", > "2018-10-02 12:36:32,987 INFO: 28747 -- Removing container: docker-puppet-memcached", > "2018-10-02 12:36:33,057 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:36:34,561 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "13f6871ba653: Pulling fs layer", > "13f6871ba653: Verifying Checksum", > "13f6871ba653: Download complete", > "13f6871ba653: Pull complete", > "Digest: sha256:b85a55179015e133b7b42af8fad710e1b8f960cf126d9fef1750a2af97c849ab", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:36:34,564 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:36:34,564 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGWv2OX:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 12:36:42,113 
DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.60 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}c2e297e8986e3b089f0b0239f98143d8'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.71", > " Total: 0.73", > " Last run: 1538483801", > " Config: 1538483800", > "Gathering files modified after 2018-10-02 12:36:34.769132958 +0000", > "2018-10-02 12:36:42,113 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:34.769132958 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/memcached", > "+ 
tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-10-02 12:36:42,113 INFO: 28747 -- Removing container: docker-puppet-memcached", > "2018-10-02 12:36:42,152 DEBUG: 28747 -- docker-puppet-memcached", > "2018-10-02 12:36:42,153 INFO: 28747 -- Finished processing puppet configs for memcached", > "2018-10-02 12:36:42,153 INFO: 28747 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- config_volume panko", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- manifest include tripleo::profile::base::panko::api", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- volumes []", > "2018-10-02 12:36:42,153 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:36:42,154 INFO: 28747 -- Removing container: docker-puppet-panko", > "2018-10-02 12:36:42,219 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:36:43,459 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.14 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}22bfee9ab4c6952d3e6f77b9c0835696'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: 
/Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: 
created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}264e3ffdf613ebdb1c9ac7158df0a8ab'", > "Notice: Applied catalog in 11.22 seconds", > " Total: 179", > " Success: 179", > " Changed: 179", > " Out of sync: 179", > " Total: 504", > " Skipped: 75", > " Cron: 0.02", > " Package: 0.09", > " Last run: 1538483800", > " Total: 16.17", > " Config retrieval: 5.88", > " Nova config: 9.84", > "Gathering files modified after 2018-10-02 12:36:16.848051594 +0000", > "2018-10-02 12:36:43,459 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch 
/var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > "Warning: Scope(Class[Nova::Api]): Running nova metadata api via evenlet is deprecated and will be removed in Stein release.", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:16.848051594 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-10-02 12:36:43,459 INFO: 28746 -- Removing container: docker-puppet-nova", > "2018-10-02 12:36:43,505 DEBUG: 28746 -- docker-puppet-nova", > "2018-10-02 12:36:43,505 INFO: 28746 -- Finished processing puppet configs for nova", > "2018-10-02 12:36:43,506 INFO: 28746 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- config_volume iscsid", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- volumes 
[u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 12:36:43,506 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:36:43,507 INFO: 28746 -- Removing container: docker-puppet-iscsid", > "2018-10-02 12:36:43,573 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:36:44,213 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "2afcd4790b43: Pulling fs layer", > "2afcd4790b43: Verifying Checksum", > "2afcd4790b43: Download complete", > "2afcd4790b43: Pull complete", > "Digest: sha256:b516e920a95255994d6493d4a922af867754e570e2afe8afeaa5c2f3e25a6d94", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:36:44,216 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:36:44,216 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpxO_CPo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 12:36:44,746 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "8eabf556166e: Pulling fs layer", > "884a4a0b0967: Pulling fs layer", > "884a4a0b0967: Verifying Checksum", > "884a4a0b0967: Download complete", > "8eabf556166e: Verifying Checksum", > "8eabf556166e: Download complete", > "8eabf556166e: Pull complete", > "884a4a0b0967: Pull complete", > "Digest: sha256:7bfddde03ab9169a2eb08c712adc74c27bb8971d4823a46dbb41e3525c2f000b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:36:44,750 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:36:44,750 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpkyD1P1:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 12:36:45,333 DEBUG: 28745 
-- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.21 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 2.04 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.62", > " Last run: 1538483803", > " Config retrieval: 2.43", > " Total: 4.13", > " Config: 1538483799", > "Gathering files modified after 2018-10-02 12:36:33.033125301 +0000", > "2018-10-02 12:36:45,333 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:33.033125301 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find 
/etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-10-02 12:36:45,334 INFO: 28745 -- Removing container: docker-puppet-heat", > "2018-10-02 12:36:45,369 DEBUG: 28745 -- docker-puppet-heat", > "2018-10-02 12:36:45,369 INFO: 28745 -- Finished processing puppet configs for heat", > "2018-10-02 12:36:45,370 INFO: 28745 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- config_volume cinder", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- volumes []", > "2018-10-02 12:36:45,370 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:36:45,371 INFO: 28745 -- Removing container: docker-puppet-cinder", > "2018-10-02 12:36:45,434 INFO: 28745 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:36:52,382 DEBUG: 28746 
-- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.52 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.58", > " Total: 0.60", > " Last run: 1538483811", > " Config: 1538483810", > "Gathering files modified after 2018-10-02 12:36:44.415174571 +0000", > "2018-10-02 12:36:52,382 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:44.415174571 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' 
'--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-10-02 12:36:52,382 INFO: 28746 -- Removing container: docker-puppet-iscsid", > "2018-10-02 12:36:52,417 DEBUG: 28746 -- docker-puppet-iscsid", > "2018-10-02 12:36:52,417 INFO: 28746 -- Finished processing puppet configs for iscsid", > "2018-10-02 12:36:52,417 INFO: 28746 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- config_volume glance_api", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- manifest include ::tripleo::profile::base::glance::api", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- volumes []", > "2018-10-02 12:36:52,418 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:36:52,418 INFO: 28746 -- Removing container: docker-puppet-glance_api", > "2018-10-02 12:36:52,503 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:36:54,120 DEBUG: 28745 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "58cfa97883f0: Pulling fs layer", > "ddff537686ab: Pulling fs layer", > "ddff537686ab: Verifying Checksum", > "ddff537686ab: Download complete", > "58cfa97883f0: Verifying Checksum", > "58cfa97883f0: Download complete", > "58cfa97883f0: Pull complete", > "ddff537686ab: Pull complete", > "Digest: sha256:ad06296168f9f7818d054cba160af0406be642f4622b2b267bf10e014843aa37", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:36:54,123 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:36:54,124 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpdrZSoK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:36:58,633 DEBUG: 28747 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.15 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3a76a691c81837a613bc3ff35e544ad7'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}2ebd654e2de095b6cbeb99758a1aa85a'", > "Notice: Applied catalog in 1.14 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 256", > " Panko api paste ini: 0.00", > " Panko config: 0.22", > " File: 0.36", > " Last run: 1538483817", > " Config retrieval: 4.68", > " Total: 5.35", > " Config: 1538483811", > "Gathering files modified after 2018-10-02 12:36:44.964176861 +0000", > "2018-10-02 12:36:58,633 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:44.964176861 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/panko", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-10-02 12:36:58,634 INFO: 28747 -- Removing container: docker-puppet-panko", > "2018-10-02 12:36:58,677 DEBUG: 28747 -- docker-puppet-panko", > "2018-10-02 12:36:58,678 INFO: 28747 -- Finished processing puppet configs for panko", > "2018-10-02 12:36:58,678 INFO: 28747 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- config_volume crond", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- volumes []", > "2018-10-02 12:36:58,678 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:36:58,680 INFO: 28747 -- Removing container: 
docker-puppet-crond", > "2018-10-02 12:36:58,745 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:36:59,124 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "07f9f19afd91: Pulling fs layer", > "0a5772a6be1c: Pulling fs layer", > "0a5772a6be1c: Verifying Checksum", > "0a5772a6be1c: Download complete", > "07f9f19afd91: Verifying Checksum", > "07f9f19afd91: Download complete", > "07f9f19afd91: Pull complete", > "0a5772a6be1c: Pull complete", > "Digest: sha256:69eb9af199d6572ba1406843685ec68dab3eeb943513ce161d1fb81714f2fc6a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:36:59,128 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:36:59,128 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpScYuuN:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net 
host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 12:36:59,257 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Verifying Checksum", > "4d80de3c75a6: Download complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:36:59,260 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:36:59,260 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmplaRyYS:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 12:37:06,854 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
controller-0.localdomain in environment production in 0.47 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.57", > " Total: 0.58", > " Last run: 1538483826", > " Config: 1538483825", > "Gathering files modified after 2018-10-02 12:36:59.688237378 +0000", > "2018-10-02 12:37:06,854 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:59.688237378 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 12:37:06,854 INFO: 28747 -- Removing container: docker-puppet-crond", > "2018-10-02 12:37:06,893 DEBUG: 28747 -- docker-puppet-crond", > "2018-10-02 12:37:06,894 INFO: 28747 -- Finished processing puppet configs for crond", > "2018-10-02 12:37:06,894 INFO: 28747 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- config_volume haproxy", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-10-02 12:37:06,894 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:37:06,896 INFO: 28747 -- Removing container: docker-puppet-haproxy", > "2018-10-02 12:37:06,961 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:37:10,881 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "21ef70eb8347: Pulling fs layer", > "21ef70eb8347: Download complete", > "21ef70eb8347: Pull complete", > "Digest: sha256:02d95b40692b62a39f6c507d29db6c493db41ee4905a1c4d7aefbd1b0324cea9", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:37:10,885 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:37:10,885 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpUCRCbR:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 12:37:12,550 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.90 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}4ffb743ff24a9aca4af793a04c09e38d'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/default_volume_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: 
defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}b5503b93dcd20635cb2b9e9fa05e71ab'", > "Notice: Applied catalog in 5.15 seconds", > " Total: 133", > " Success: 133", > " Changed: 133", > " Out of sync: 133", > " Skipped: 37", > " Total: 370", > " File line: 0.00", > " Augeas: 0.65", > " Last run: 1538483830", > " Cinder config: 3.52", > " Config retrieval: 4.48", > " Total: 9.01", > " Config: 1538483821", > "Gathering files modified after 2018-10-02 
12:36:54.327215650 +0000", > "2018-10-02 12:37:12,550 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_info parameter is deprecated, has no effect and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:54.327215650 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/cinder", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-10-02 12:37:12,551 INFO: 28745 -- Removing container: docker-puppet-cinder", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- docker-puppet-cinder", > "2018-10-02 12:37:12,607 INFO: 28745 -- Finished processing puppet configs for cinder", > "2018-10-02 12:37:12,607 
INFO: 28745 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- config_volume swift", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:37:12,607 DEBUG: 28745 -- volumes []", > "2018-10-02 12:37:12,608 DEBUG: 28745 -- check_mode 0", > "2018-10-02 12:37:12,609 INFO: 28745 -- Removing container: docker-puppet-swift", > "2018-10-02 12:37:12,655 INFO: 28745 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:37:12,658 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:37:12,658 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpNXUOkf:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 12:37:12,671 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.53 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.82 seconds", > " Total: 44", > " Success: 44", > " Total: 255", > " Out of sync: 44", > " Changed: 44", > " Skipped: 60", > " Glance cache config: 0.25", > " Last run: 1538483831", > " Glance api config: 2.12", > " Config retrieval: 2.89", > " Total: 5.33", > "Gathering files modified after 2018-10-02 12:36:59.326235911 +0000", > "2018-10-02 12:37:12,671 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 198]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. 
at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:36:59.326235911 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/glance_api", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-10-02 12:37:12,671 INFO: 28746 -- Removing container: docker-puppet-glance_api", > "2018-10-02 12:37:12,714 DEBUG: 28746 -- docker-puppet-glance_api", > "2018-10-02 12:37:12,714 INFO: 28746 -- Finished processing puppet configs for glance_api", > "2018-10-02 12:37:12,715 INFO: 28746 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- config_volume rabbitmq", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- manifest ['Rabbitmq_policy', 
'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- volumes []", > "2018-10-02 12:37:12,715 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:37:12,716 INFO: 28746 -- Removing container: docker-puppet-rabbitmq", > "2018-10-02 12:37:12,791 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:37:17,418 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "7631898d5513: Pulling fs layer", > "7631898d5513: Verifying Checksum", > "7631898d5513: Download complete", > "7631898d5513: Pull complete", > "Digest: sha256:a77a6ab407a3f4020e73c1dc1548581abaeeacfdfb4c397b44d307beeedc98b4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:37:17,421 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:37:17,421 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpu29IY7:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 12:37:21,947 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.98 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}e632867547a31f39d12662dd19c6e877'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.42 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " File: 0.09", > " Last run: 1538483841", > " Config retrieval: 3.23", > " Total: 3.33", > " Config: 1538483837", > "Gathering files modified after 2018-10-02 12:37:11.082281752 +0000", > "2018-10-02 12:37:21,948 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:11.082281752 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/haproxy", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-10-02 12:37:21,948 INFO: 28747 -- Removing container: docker-puppet-haproxy", > "2018-10-02 12:37:21,984 DEBUG: 28747 -- docker-puppet-haproxy", > "2018-10-02 12:37:21,984 INFO: 28747 -- Finished processing puppet configs for haproxy", > "2018-10-02 12:37:21,985 INFO: 28747 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- config_volume ceilometer", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- puppet_tags 
file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- volumes []", > "2018-10-02 12:37:21,985 DEBUG: 28747 -- check_mode 0", > "2018-10-02 12:37:21,986 INFO: 28747 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 12:37:22,056 INFO: 28747 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:37:23,800 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.08 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.20:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}c14e13804b8ff4966c14aba30e062ed3'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}f9d04c98449c1125ddf91145a3730909'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'VdgkIMr94WocvP5mcFjrca7al'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken s3api s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.20:11211'", > "Notice: 
/Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}03c212471330711f4437248f68a55b5e'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}bb94f9e07436965689759ae52db539b9'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}4efb3e4938cd4a1f8b820a4e8b7febee'", > "Notice: Applied catalog in 0.73 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.02", > " File: 0.04", > " Swift proxy config: 0.24", > " Last run: 1538483842", > " Config retrieval: 2.48", > " Total: 2.80", > " Config: 1538483839", > "Gathering files modified after 2018-10-02 12:37:12.864288550 +0000", > "2018-10-02 12:37:23,800 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. 
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 189]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 203]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:12.864288550 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-10-02 12:37:23,801 INFO: 28745 -- Removing container: docker-puppet-swift", > "2018-10-02 12:37:23,837 DEBUG: 28745 -- docker-puppet-swift", > "2018-10-02 12:37:23,837 INFO: 28745 -- Finished processing puppet configs for swift", > "2018-10-02 12:37:23,837 INFO: 28745 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 12:37:23,837 DEBUG: 28745 -- config_volume heat_api_cfn", > "2018-10-02 12:37:23,837 DEBUG: 28745 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 12:37:23,837 DEBUG: 28745 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-10-02 12:37:23,838 DEBUG: 28745 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 12:37:23,838 DEBUG: 28745 -- volumes []", > "2018-10-02 12:37:23,838 DEBUG: 
28745 -- check_mode 0", > "2018-10-02 12:37:23,839 INFO: 28745 -- Removing container: docker-puppet-heat_api_cfn", > "2018-10-02 12:37:23,918 INFO: 28745 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 12:37:24,349 DEBUG: 28747 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "5fcda0d83a5e: Pulling fs layer", > "2142eca15b92: Pulling fs layer", > "2142eca15b92: Download complete", > "5fcda0d83a5e: Verifying Checksum", > "5fcda0d83a5e: Download complete", > "5fcda0d83a5e: Pull complete", > "2142eca15b92: Pull complete", > "Digest: sha256:ba6a24fd5b438c2530cbd903d1b4616e6075f146618be39391273ae43949bbad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:37:24,352 DEBUG: 28747 -- NET_HOST enabled", > "2018-10-02 12:37:24,352 DEBUG: 28747 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpkN5xSl:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 12:37:24,519 DEBUG: 28745 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "d1bf34aac9d8: Already exists", > "814880d697ca: Pulling fs layer", > "814880d697ca: Verifying Checksum", > "814880d697ca: Download complete", > "814880d697ca: Pull complete", > "Digest: sha256:83df23b0a5e5290012456aa81f05f4c3df8b4dea4e0e6a53f8392ca4cd9f0067", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 12:37:24,523 DEBUG: 28745 -- NET_HOST enabled", > "2018-10-02 12:37:24,523 DEBUG: 28745 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp3x0Jgw:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 
12:37:30,985 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.88 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}279e42511ea04897e294829a576d05d5'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}fe360f3aa9a3f3f3b4a3e450796bb7c1'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " Config retrieval: 1.01", > " Total: 1.05", > " Last run: 1538483850", > " Config: 1538483849", > "Gathering files modified after 2018-10-02 
12:37:17.616306485 +0000", > "2018-10-02 12:37:30,986 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:17.616306485 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/rabbitmq", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-10-02 12:37:30,986 INFO: 28746 -- Removing container: docker-puppet-rabbitmq", > "2018-10-02 12:37:31,038 DEBUG: 28746 -- docker-puppet-rabbitmq", > "2018-10-02 12:37:31,039 INFO: 28746 -- Finished processing puppet configs for rabbitmq", > "2018-10-02 12:37:31,039 INFO: 28746 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- config_volume neutron", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- puppet_tags 
file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 12:37:31,039 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:37:31,041 INFO: 28746 -- Removing container: docker-puppet-neutron", > "2018-10-02 12:37:31,115 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:37:34,293 DEBUG: 28747 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.41 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}e1b13cf3e430a5cacf9cd8ad4704c3b5'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.71 seconds", > " Total: 26", > " Success: 26", > " Total: 156", > " Out of sync: 26", > " Changed: 26", > " Skipped: 35", > " Ceilometer config: 0.55", > " Config retrieval: 1.68", > " Last run: 1538483853", > " Total: 2.23", > " Config: 1538483850", > "Gathering files modified after 2018-10-02 12:37:24.565332097 +0000", > "2018-10-02 12:37:34,293 DEBUG: 28747 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:24.565332097 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-10-02 12:37:34,294 INFO: 28747 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 12:37:34,329 DEBUG: 28747 -- docker-puppet-ceilometer", > "2018-10-02 12:37:34,329 INFO: 28747 -- Finished processing puppet configs 
for ceilometer", > "2018-10-02 12:37:37,095 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "f3c66d22e08b: Pulling fs layer", > "6cca3e1c80e1: Pulling fs layer", > "d405f46408bf: Pulling fs layer", > "d405f46408bf: Verifying Checksum", > "d405f46408bf: Download complete", > "6cca3e1c80e1: Verifying Checksum", > "6cca3e1c80e1: Download complete", > "f3c66d22e08b: Verifying Checksum", > "f3c66d22e08b: Download complete", > "f3c66d22e08b: Pull complete", > "6cca3e1c80e1: Pull complete", > "d405f46408bf: Pull complete", > "Digest: sha256:0c7ace86b7c08a5ec94dbf283b5a7a95f0678caf8c830185bcfc7a5dbaec5704", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:37:37,099 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:37:37,099 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp7v4TYX:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 12:37:40,154 DEBUG: 28745 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.54 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}67f86c5d4b98738de34c1b40fd6e151a'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}404887a99e4c99432ce147e143a87a67'", > "Notice: Applied catalog in 2.60 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 338", > " 
Last run: 1538483858", > " Config retrieval: 5.07", > " Total: 7.04", > " Config: 1538483851", > "Gathering files modified after 2018-10-02 12:37:24.983333592 +0000", > "2018-10-02 12:37:40,155 DEBUG: 28745 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:24.983333592 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api_cfn", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-10-02 12:37:40,155 INFO: 28745 -- Removing container: docker-puppet-heat_api_cfn", > "2018-10-02 12:37:40,204 DEBUG: 28745 -- docker-puppet-heat_api_cfn", > "2018-10-02 12:37:40,204 INFO: 28745 -- Finished processing puppet configs for heat_api_cfn", > "2018-10-02 12:37:51,302 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: 
cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.87 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > 
"Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.81 seconds", > " Total: 105", > " Success: 105", > " Changed: 105", > " Out of sync: 105", > " Total: 358", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron agent ovs: 0.01", > " Neutron l3 agent config: 0.02", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron dhcp agent config: 0.10", > " Neutron config: 1.32", > " Last run: 1538483869", > " Config retrieval: 4.32", > " Total: 5.88", > " Config: 1538483863", > "Gathering files modified after 2018-10-02 12:37:37.296377115 +0000", > "2018-10-02 12:37:51,302 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 492]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 290]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:37.296377115 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-10-02 12:37:51,302 INFO: 28746 -- Removing container: docker-puppet-neutron", > "2018-10-02 12:37:51,351 DEBUG: 28746 -- docker-puppet-neutron", > "2018-10-02 12:37:51,351 INFO: 28746 -- Finished processing puppet configs for neutron", > "2018-10-02 12:37:51,351 INFO: 28746 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 12:37:51,351 DEBUG: 28746 -- config_volume horizon", > "2018-10-02 12:37:51,351 DEBUG: 28746 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-10-02 12:37:51,352 DEBUG: 28746 -- manifest include ::tripleo::profile::base::horizon", > "2018-10-02 12:37:51,352 DEBUG: 28746 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 12:37:51,352 DEBUG: 28746 
-- volumes []", > "2018-10-02 12:37:51,352 DEBUG: 28746 -- check_mode 0", > "2018-10-02 12:37:51,352 INFO: 28746 -- Removing container: docker-puppet-horizon", > "2018-10-02 12:37:51,418 INFO: 28746 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 12:37:57,098 DEBUG: 28746 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "e2ca3343265c: Pulling fs layer", > "e2ca3343265c: Download complete", > "e2ca3343265c: Pull complete", > "Digest: sha256:fc09d11276f0250ec232eada31a7417337bdad0257605eb44ff4afc1692e17b5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 12:37:57,102 DEBUG: 28746 -- NET_HOST enabled", > "2018-10-02 12:37:57,102 DEBUG: 28746 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpSSD5KP:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 
12:38:08,691 DEBUG: 28746 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.73 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as '{md5}aa5f53c440240404baf02eecc50a15bb'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}4d221af5629a83b7066fa0b5f6a2745e'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}8e891bc57ee752f792938ffd379bd3c7' to '{md5}2ba3db3dc1005f16a58533e9ddabfbfb'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}f8e027b91e63adb42b0e33876f695251'", > "Notice: Applied catalog in 0.77 seconds", > " Total: 86", > " Success: 86", > " Total: 
172", > " Out of sync: 84", > " Changed: 84", > " Last run: 1538483887", > " Config retrieval: 3.14", > " Total: 3.40", > " Config: 1538483883", > "Gathering files modified after 2018-10-02 12:37:57.295443679 +0000", > "2018-10-02 12:38:08,692 DEBUG: 28746 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 604]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 605]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 607]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 12:37:57.295443679 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/horizon", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-10-02 12:38:08,692 INFO: 28746 -- Removing container: docker-puppet-horizon", > "2018-10-02 12:38:08,741 DEBUG: 28746 -- docker-puppet-horizon", > "2018-10-02 12:38:08,741 INFO: 28746 -- Finished processing puppet configs for horizon", > "2018-10-02 12:38:08,742 DEBUG: 28744 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-10-02 12:38:08,743 DEBUG: 28744 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-10-02 12:38:08,745 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Updating config 
hash for mysql_bootstrap, config_volume=heat_api_cfn hash=52aa556de8de249505101520a9fc9702", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=06ab927997717e8acd2b701612e4b3f3", > "2018-10-02 12:38:08,746 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=9c1cc2812133cb6fd02affefce24908f", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=52aa556de8de249505101520a9fc9702", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume 
/var/lib/config-data/puppet-generated/haproxy", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=e632867547a31f39d12662dd19c6e877", > "2018-10-02 12:38:08,749 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=06ab927997717e8acd2b701612e4b3f3", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-10-02 12:38:08,750 DEBUG: 28744 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=609472194071845f36d9e18d98d7647a", > "2018-10-02 12:38:08,752 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-10-02 12:38:08,752 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > 
"2018-10-02 12:38:08,752 DEBUG: 28744 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=32cb7dd09e3699ead04b04d57329fbfb", > "2018-10-02 12:38:08,752 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,752 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,752 DEBUG: 28744 -- Updating config hash for swift_rsync_fix, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-10-02 12:38:08,753 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > 
"2018-10-02 12:38:08,753 DEBUG: 28744 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=f9ddfea989e5ef4e837754ce95b8ec8c", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=f9ddfea989e5ef4e837754ce95b8ec8c", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=f9ddfea989e5ef4e837754ce95b8ec8c", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-10-02 12:38:08,754 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 
12:38:08,754 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-10-02 12:38:08,755 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=cab39a3ac8c5f40430cc207a9d99f81d", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume 
/var/lib/config-data/neutron/usr/share", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-10-02 12:38:08,756 DEBUG: 28744 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=fa086b1543c9b0fd07159794ac1ef96f", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=89e02684931d1c3e1b82a07465f122c1", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Got hashfile 
/var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,759 DEBUG: 28744 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=64cea007f97ad23d4a3ac3fdb333b631", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-10-02 12:38:08,760 DEBUG: 28744 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=b64a49401c53853170ca2ae37c1f4fa6", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=ba43fc4e189db8ef2dff3805494234d0", > 
"2018-10-02 12:38:08,761 DEBUG: 28744 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,761 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=89e02684931d1c3e1b82a07465f122c1", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume 
/var/lib/config-data/puppet-generated/glance_api", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-10-02 12:38:08,762 DEBUG: 28744 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=cab39a3ac8c5f40430cc207a9d99f81d", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=ba43fc4e189db8ef2dff3805494234d0-426ac20c50a9ca660b40446eeb192c50", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- 
Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,763 DEBUG: 28744 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=89e02684931d1c3e1b82a07465f122c1", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash 
for heat_engine, config_volume=heat_api_cfn hash=4f25de88d56a8bc7af66911b813e446f", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=b64a49401c53853170ca2ae37c1f4fa6", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,764 DEBUG: 28744 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,765 
DEBUG: 28744 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=64cea007f97ad23d4a3ac3fdb333b631", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=022bf1480d23ccb42b0c87bb2cc84137", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-10-02 12:38:08,765 DEBUG: 28744 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=5fcd13de261f0dc53e831ea479a7d8b5", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-10-02 
12:38:08,766 DEBUG: 28744 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=5fcd13de261f0dc53e831ea479a7d8b5", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=6f2a5e23a896d70ebbc2c66d87cd9266", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,766 DEBUG: 28744 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", 
> "2018-10-02 12:38:08,767 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=de9f685af53b9e32a72f556cf248c5b3", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=426ac20c50a9ca660b40446eeb192c50", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,767 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=89e02684931d1c3e1b82a07465f122c1", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > 
"2018-10-02 12:38:08,768 DEBUG: 28744 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=be920786e43d9d72b682ca1d9a274b7c", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-10-02 12:38:08,768 DEBUG: 28744 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=b014066e50cdff4c80a4acf5d9f78eb6", > "2018-10-02 12:38:08,770 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=667b52d3c739794e25cd4d14cfa40344", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn 
hash=667b52d3c739794e25cd4d14cfa40344", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,771 DEBUG: 28744 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=64cea007f97ad23d4a3ac3fdb333b631", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=667b52d3c739794e25cd4d14cfa40344", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=667b52d3c739794e25cd4d14cfa40344", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Got hashfile 
/var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 12:38:08,772 DEBUG: 28744 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=64cea007f97ad23d4a3ac3fdb333b631", > "2018-10-02 12:38:08,773 DEBUG: 28744 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=64cea007f97ad23d4a3ac3fdb333b631" > ] >} >2018-10-02 08:38:10,318 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:38:10,318 p=1004 u=mistral | Tuesday 02 October 2018 08:38:10 -0400 (0:00:01.307) 0:09:23.052 ******* >2018-10-02 08:38:10,354 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:10,382 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:10,398 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:10,428 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:38:10,428 p=1004 u=mistral | Tuesday 02 October 2018 08:38:10 -0400 (0:00:00.109) 0:09:23.161 ******* >2018-10-02 08:38:10,461 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:38:10,488 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:38:10,507 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:38:10,538 p=1004 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-10-02 08:38:10,538 p=1004 u=mistral | Tuesday 02 October 2018 08:38:10 -0400 (0:00:00.109) 0:09:23.271 ******* >2018-10-02 08:38:11,109 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:11,110 
p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:39,602 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:39,632 p=1004 u=mistral | TASK [Debug output for task: Start containers for step 1] ********************** >2018-10-02 08:38:39,633 p=1004 u=mistral | Tuesday 02 October 2018 08:38:39 -0400 (0:00:29.094) 0:09:52.366 ******* >2018-10-02 08:38:39,775 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "58cfa97883f0: Already exists", > "b22bc33202f5: Pulling fs layer", > "b22bc33202f5: Download complete", > "b22bc33202f5: Pull complete", > "Digest: sha256:9be80516b13b878894cae03aae4bd4f039c4deace2065b4f76e804e2272b208f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "c3ba3ad5e66e: Pulling fs layer", > "c3ba3ad5e66e: Verifying Checksum", > "c3ba3ad5e66e: Download complete", > "c3ba3ad5e66e: Pull complete", > "Digest: sha256:d507723333640d3a4288adc083ee03560e5a216c19584c673f802cab5ee4e6bc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", > "stdout: ", > "stdout: 56aebeb365a21973e35aa010c40002171c8c17724f743e243b89952b35c0ea5c", > "stdout: 31905c328379305703b0de8aafb4517819953d4027a56756b04b774663b1b52f", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "181002 12:38:30 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "181002 12:38:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "181002 12:38:33 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "181002 12:38:34 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "181002 12:38:34 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "181002 12:38:37 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-10-02 12:38:17 140039823345856 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 12:38:17 140039823345856 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-10-02 12:38:22 139622369691840 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 12:38:22 139622369691840 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-10-02 12:38:26 140613621844160 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 12:38:26 140613621844160 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]" > ] >} >2018-10-02 08:38:39,789 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:38:39,830 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:38:39,856 p=1004 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-10-02 08:38:39,856 p=1004 u=mistral | Tuesday 02 October 2018 08:38:39 -0400 (0:00:00.223) 0:09:52.590 ******* >2018-10-02 08:38:40,152 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:38:40,208 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:38:40,209 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": 
false}} >2018-10-02 08:38:40,232 p=1004 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-10-02 08:38:40,233 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.376) 0:09:52.966 ******* >2018-10-02 08:38:40,266 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:40,292 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:40,306 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:38:40,332 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-10-02 08:38:40,333 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.099) 0:09:53.066 ******* >2018-10-02 08:38:40,364 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:38:40,391 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:38:40,403 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:38:40,409 p=1004 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-10-02 08:38:40,428 p=1004 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 08:38:40,428 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.095) 0:09:53.162 ******* >2018-10-02 08:38:40,447 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,460 p=1004 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 08:38:40,460 p=1004 u=mistral | Tuesday 02 October 2018 
08:38:40 -0400 (0:00:00.031) 0:09:53.194 ******* >2018-10-02 08:38:40,504 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,505 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,506 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,524 p=1004 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 08:38:40,525 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.064) 0:09:53.258 ******* >2018-10-02 08:38:40,544 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,558 p=1004 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 08:38:40,558 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.033) 0:09:53.292 ******* >2018-10-02 08:38:40,579 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,592 p=1004 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 08:38:40,592 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.033) 0:09:53.325 ******* >2018-10-02 08:38:40,612 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,625 p=1004 u=mistral | 
TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 08:38:40,625 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.033) 0:09:53.359 ******* >2018-10-02 08:38:40,646 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,659 p=1004 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 08:38:40,660 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.034) 0:09:53.393 ******* >2018-10-02 08:38:40,678 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,692 p=1004 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 08:38:40,692 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.032) 0:09:53.425 ******* >2018-10-02 08:38:40,711 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,725 p=1004 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 08:38:40,725 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.033) 0:09:53.458 ******* >2018-10-02 08:38:40,745 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:38:40,759 p=1004 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 08:38:40,759 p=1004 u=mistral | Tuesday 02 October 2018 08:38:40 -0400 (0:00:00.034) 0:09:53.493 ******* >2018-10-02 08:38:43,435 p=1004 u=mistral | changed: [undercloud] => {"changed": true, "cmd": "ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_command.log\" ANSIBLE_CONFIG=\"/var/lib/mistral/overcloud/ansible.cfg\" ANSIBLE_REMOTE_TEMP=/tmp/nodes_uuid_tmp 
ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml /var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "delta": "0:00:02.499956", "end": "2018-10-02 08:38:43.416123", "rc": 0, "start": "2018-10-02 08:38:40.916167", "stderr": "", "stderr_lines": [], "stdout": "\nPLAY [all] *********************************************************************\n\nTASK [set nodes data] **********************************************************\nTuesday 02 October 2018 08:38:42 -0400 (0:00:00.089) 0:00:00.089 ******* \nok: [ceph-0]\nok: [compute-0]\nok: [controller-0]\n\nTASK [register machine id] *****************************************************\nTuesday 02 October 2018 08:38:42 -0400 (0:00:00.070) 0:00:00.159 ******* \nchanged: [compute-0]\nchanged: [ceph-0]\nchanged: [controller-0]\n\nTASK [generate host vars from nodes data] **************************************\nTuesday 02 October 2018 08:38:42 -0400 (0:00:00.321) 0:00:00.480 ******* \nchanged: [controller-0 -> localhost]\nchanged: [compute-0 -> localhost]\nchanged: [ceph-0 -> localhost]\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=3 changed=2 unreachable=0 failed=0 \ncompute-0 : ok=3 changed=2 unreachable=0 failed=0 \ncontroller-0 : ok=3 changed=2 unreachable=0 failed=0 \n\nTuesday 02 October 2018 08:38:43 -0400 (0:00:00.612) 0:00:01.092 ******* \n=============================================================================== ", "stdout_lines": ["", "PLAY [all] *********************************************************************", "", "TASK [set nodes data] **********************************************************", "Tuesday 02 October 2018 08:38:42 -0400 (0:00:00.089) 0:00:00.089 ******* ", "ok: [ceph-0]", "ok: [compute-0]", "ok: [controller-0]", "", "TASK [register machine id] *****************************************************", "Tuesday 02 October 2018 08:38:42 -0400 
(0:00:00.070) 0:00:00.159 ******* ", "changed: [compute-0]", "changed: [ceph-0]", "changed: [controller-0]", "", "TASK [generate host vars from nodes data] **************************************", "Tuesday 02 October 2018 08:38:42 -0400 (0:00:00.321) 0:00:00.480 ******* ", "changed: [controller-0 -> localhost]", "changed: [compute-0 -> localhost]", "changed: [ceph-0 -> localhost]", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=3 changed=2 unreachable=0 failed=0 ", "compute-0 : ok=3 changed=2 unreachable=0 failed=0 ", "controller-0 : ok=3 changed=2 unreachable=0 failed=0 ", "", "Tuesday 02 October 2018 08:38:43 -0400 (0:00:00.612) 0:00:01.092 ******* ", "=============================================================================== "]} >2018-10-02 08:38:43,449 p=1004 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 08:38:43,449 p=1004 u=mistral | Tuesday 02 October 2018 08:38:43 -0400 (0:00:02.689) 0:09:56.182 ******* >2018-10-02 08:38:43,483 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2, "ceph_ansible_playbooks_param": ["default"]}, "changed": false} >2018-10-02 08:38:43,498 p=1004 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 08:38:43,499 p=1004 u=mistral | Tuesday 02 October 2018 08:38:43 -0400 (0:00:00.049) 0:09:56.232 ******* >2018-10-02 08:38:43,533 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbooks": ["/usr/share/ceph-ansible/site-docker.yml.sample"]}, "changed": false} >2018-10-02 08:38:43,548 p=1004 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 08:38:43,548 p=1004 u=mistral | Tuesday 02 October 2018 08:38:43 -0400 (0:00:00.049) 0:09:56.281 ******* >2018-10-02 08:38:43,584 p=1004 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": 
"ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-10-02 08:38:43,597 p=1004 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 08:38:43,598 p=1004 u=mistral | Tuesday 02 October 2018 08:38:43 -0400 (0:00:00.049) 0:09:56.331 ******* >2018-10-02 08:42:56,707 p=1004 u=mistral | changed: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:04:12.675607", "end": "2018-10-02 08:42:56.418611", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "rc": 0, "start": "2018-10-02 08:38:43.743004", "stderr": "[DEPRECATION 
WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. 
Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. 
Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. 
Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", " [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. 
Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.7\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml

PLAYBOOK: site-docker.yml.sample ***********************************************
12 plays in /usr/share/ceph-ansible/site-docker.yml.sample

PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***

TASK [gather facts] ************************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:25
Tuesday 02 October 2018 08:38:47 -0400 (0:00:00.215) 0:00:00.215 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gather and delegate facts] ***********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:30
Tuesday 02 October 2018 08:38:47 -0400 (0:00:00.086) 0:00:00.302 ******* 
ok: [controller-0 -> 192.168.24.12] => (item=compute-0)
ok: [controller-0 -> 192.168.24.10] => (item=controller-0)
ok: [controller-0 -> 192.168.24.8] => (item=ceph-0)

TASK [check if it is atomic host] **********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:39
Tuesday 02 October 2018 08:39:00 -0400 (0:00:13.098) 0:00:13.400 ******* 
ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}

TASK [set_fact is_atomic] ******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:46
Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.433) 0:00:13.833 ******* 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [ceph-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [compute-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
META: ran handlers
META: ran handlers

TASK [pull rhceph image] *******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:66
Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.251) 0:00:14.085 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:76
Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.122) 0:00:14.207 ******* 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"start": "20181002083901Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.247) 0:00:14.455 ******* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.031513", "end": "2018-10-02 12:39:02.007429", "failed_when_result": false, "rc": 0, "start": "2018-10-02 12:39:01.975916", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.349) 0:00:14.804 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:14.851 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:14.899 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.049) 0:00:14.948 ******* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.024345", "end": "2018-10-02 12:39:02.404826", "failed_when_result": false, "rc": 0, "start": "2018-10-02 12:39:02.380481", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.255) 0:00:15.203 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.051) 0:00:15.255 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.048) 0:00:15.304 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.045) 0:00:15.350 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.396 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.053) 0:00:15.450 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.051) 0:00:15.501 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.547 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:15.595 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.045) 0:00:15.640 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:15.688 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.734 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:15.784 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.048) 0:00:15.832 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:15.882 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:15.934 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.058) 0:00:15.993 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.053) 0:00:16.046 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.098 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:16.149 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.200 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.251 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.047) 0:00:16.299 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.045) 0:00:16.345 ******* 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.230) 0:00:16.576 ******* 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.072) 0:00:16.648 ******* 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.084) 0:00:16.733 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.087) 0:00:16.820 ******* 
ok: [controller-0 -> 192.168.24.10] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.160) 0:00:16.980 ******* 
ok: [controller-0 -> 192.168.24.10] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "-s", "-f", "json"], "delta": "0:00:00.025706", "end": "2018-10-02 12:39:04.446788", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-10-02 12:39:04.421082", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.275) 0:00:17.255 ******* 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.192) 0:00:17.447 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.053) 0:00:17.501 ******* 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 42430}

TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.424) 0:00:17.925 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.053) 0:00:17.978 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.048) 0:00:18.027 ******* 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.080) 0:00:18.108 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.049) 0:00:18.157 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.048) 0:00:18.205 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.044) 0:00:18.250 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.055) 0:00:18.306 ******* 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.181) 0:00:18.487 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.047) 0:00:18.535 ******* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.180) 0:00:18.716 ******* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.185) 0:00:18.901 ******* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.248) 0:00:19.149 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.048) 0:00:19.198 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.046) 0:00:19.245 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.047) 0:00:19.292 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.337 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.046) 0:00:19.384 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.428 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.048) 0:00:19.477 ******* 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact rgw_hostname] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.068) 0:00:19.545 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.589 ******* 
ok: [controller-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.067) 0:00:19.657 ******* 
changed: [controller-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Tuesday 02 October 2018 08:39:09 -0400 (0:00:02.110) 0:00:21.768 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.055) 0:00:21.824 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.062) 0:00:21.886 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.051) 0:00:21.937 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.050) 0:00:21.988 ******* 
ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.426) 0:00:22.415 ******* 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.081) 0:00:22.496 ******* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.045) 0:00:22.542 ******* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.024330", "end": "2018-10-02 12:39:09.999720", "rc": 0, "start": "2018-10-02 12:39:09.975390", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 8633870/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 8633870/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.256) 0:00:22.798 ******* 
ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.070) 0:00:22.869 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.024567\", \"end\": \"2018-10-02 12:39:10.319930\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:10.295363\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.252) 0:00:23.121 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.085) 0:00:23.207 ******* \nok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.137) 0:00:23.344 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact 
ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.084) 0:00:23.429 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nTuesday 02 October 2018 08:39:10 -0400 (0:00:00.096) 0:00:23.526 ******* \nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => 
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nTuesday 02 October 2018 08:39:12 -0400 (0:00:01.235) 0:00:24.761 ******* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, 
\"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, 
'_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': 
{u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': 
True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", 
{\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": 
false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.304) 0:00:25.066 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.049) 0:00:25.116 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.044) 0:00:25.160 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.051) 0:00:25.212 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.052) 0:00:25.264 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.055) 0:00:25.320 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.058) 0:00:25.378 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.049) 0:00:25.428 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.053) 0:00:25.481 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.056) 0:00:25.537 ******* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.048) 0:00:25.586 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.053) 0:00:25.639 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nTuesday 02 October 2018 08:39:12 -0400 (0:00:00.051) 0:00:25.691 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.059) 0:00:25.750 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:25.802 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:25.853 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.057) 0:00:25.910 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.059) 0:00:25.970 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.021 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:26.072 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.053) 0:00:26.125 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.048) 0:00:26.173 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.224 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.053) 0:00:26.278 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.048) 0:00:26.327 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:26.378 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.429 ******* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.055) 0:00:26.484 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.052) 0:00:26.537 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nTuesday 02 October 2018 08:39:13 -0400 (0:00:00.049) 0:00:26.586 ******* \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.823581\", \"end\": \"2018-10-02 12:39:27.938363\", \"rc\": 0, \"start\": \"2018-10-02 12:39:14.114782\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nTuesday 02 October 2018 08:39:27 -0400 (0:00:14.156) 0:00:40.743 ******* \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027445\", \"end\": \"2018-10-02 12:39:28.211179\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:28.183734\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n 
],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n 
\\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.282) 0:00:41.025 ******* \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.084) 0:00:41.109 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.057) 0:00:41.167 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.055) 0:00:41.222 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.274 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.048) 0:00:41.323 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.054) 0:00:41.378 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.430 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.058) 0:00:41.488 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.541 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.592 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.644 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nTuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.696 ******* \nok: 
[controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.460525\", \"end\": \"2018-10-02 12:39:29.600571\", \"rc\": 0, \"start\": \"2018-10-02 12:39:29.140046\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.704) 0:00:42.401 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.082) 0:00:42.483 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.050) 0:00:42.534 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.048) 0:00:42.582 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.082) 0:00:42.665 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nTuesday 02 October 2018 08:39:29 -0400 (0:00:00.056) 0:00:42.721 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nTuesday 02 October 2018 08:39:30 -0400 (0:00:00.047) 0:00:42.769 ******* \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 
64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nTuesday 02 October 2018 08:39:30 -0400 (0:00:00.949) 0:00:43.718 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.055) 0:00:43.773 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.051) 0:00:43.824 ******* \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.214) 
0:00:44.039 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.055) 0:00:44.095 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.048) 0:00:44.143 ******* \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nTuesday 02 October 2018 08:39:31 -0400 (0:00:00.255) 0:00:44.398 ******* \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set 
_osd_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"d7acef6abeb4e7853e1cf2b7e41f2f58868cad4a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, 
\"group\": \"root\", \"md5sum\": \"a31e326b2b79369b2901aa2d0f318a37\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483971.7-281398146065481/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nTuesday 02 October 2018 08:39:34 -0400 (0:00:02.513) 0:00:46.912 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.053) 0:00:46.965 ******* \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.079) 0:00:47.044 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate monitor initial keyring] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.059) 0:00:47.103 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : read monitor initial keyring if it already exists] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.061) 0:00:47.165 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-mon : create monitor initial keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.218 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set initial monitor key permissions] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.271 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create (and fix ownership of) monitor directory] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.049) 0:00:47.321 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.050) 0:00:47.371 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.051) 0:00:47.423 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create custom admin keyring] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.055) 0:00:47.478 ******* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set ownership of admin keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.055) 0:00:47.533 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : import admin keyring into mon keyring] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.054) 0:00:47.588 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs with keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.641 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs without keyring] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113\nTuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.693 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.061) 0:00:47.755 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add ceph-mon systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10\nTuesday 02 October 2018 08:39:35 
-0400 (0:00:00.052) 0:00:47.807 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : start the monitor service] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.052) 0:00:47.860 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : enable the ceph-mon.target service] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.051) 0:00:47.912 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : include ceph_keys.yml] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.051) 0:00:47.963 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : collect all the pools] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.054) 0:00:48.018 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : secure the cluster] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.058) 0:00:48.077 ******* \n\nTASK [ceph-mon : set_fact ceph_config_keys] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.055) 0:00:48.132 ******* \nok: [controller-0] => 
{\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : register rbd bootstrap key] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.081) 0:00:48.214 ******* \nok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.094) 0:00:48.308 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : stat for ceph config and keys] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22\nTuesday 02 October 2018 08:39:35 -0400 (0:00:00.088) 0:00:48.397 ******* \nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => 
{\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-mon : try to copy ceph keys] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33\nTuesday 02 October 2018 08:39:36 -0400 (0:00:00.943) 0:00:49.341 ******* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": 
\"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": 
{\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, 
\"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': 
{u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 
u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with default ceph.conf] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2\nTuesday 02 October 2018 08:39:36 -0400 (0:00:00.153) 0:00:49.494 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with custom ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18\nTuesday 02 October 2018 08:39:36 -0400 (0:00:00.055) 0:00:49.550 ******* \nskipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : delete populate-kv-store docker] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36\nTuesday 02 October 2018 08:39:36 -0400 (0:00:00.057) 0:00:49.607 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43\nTuesday 02 October 2018 08:39:36 -0400 (0:00:00.046) 0:00:49.654 ******* \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"30dd79ca23c7e5e775a5e6dab299d35ee19c6909\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"e0f5a6276ad9be3c40dea6db9c92e5a5\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 887, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483976.95-259272916863923/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : systemd start mon container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54\nTuesday 02 October 2018 08:39:37 -0400 (0:00:00.872) 0:00:50.526 ******* \nchanged: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmon.slice docker.service basic.target systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": 
\"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.15 -e CLUSTER=ceph -e FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": 
\"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127792\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127792\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": 
\"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mon : configure ceph profile.d aliases] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2\nTuesday 02 October 2018 08:39:38 -0400 (0:00:00.702) 0:00:51.229 ******* \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483978.52-29330794934663/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : wait for monitor socket to exist] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12\nTuesday 02 October 2018 08:39:39 -0400 (0:00:00.552) 0:00:51.781 ******* \nFAILED - RETRYING: 
wait for monitor socket to exist (5 retries left).\nchanged: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.078298\", \"end\": \"2018-10-02 12:39:54.690204\", \"rc\": 0, \"start\": \"2018-10-02 12:39:54.611906\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 333080 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-10-02 12:39:39.563714788 +0000\\nModify: 2018-10-02 12:39:39.563714788 +0000\\nChange: 2018-10-02 12:39:39.563714788 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 333080 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-10-02 12:39:39.563714788 +0000\", \"Modify: 2018-10-02 12:39:39.563714788 +0000\", \"Change: 2018-10-02 12:39:39.563714788 +0000\", \" Birth: -\"]}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19\nTuesday 02 October 2018 08:39:54 -0400 (0:00:15.711) 0:01:07.493 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29\nTuesday 02 October 2018 08:39:54 -0400 (0:00:00.093) 0:01:07.586 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon 
: ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39\nTuesday 02 October 2018 08:39:54 -0400 (0:00:00.094) 0:01:07.681 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.15\"], \"delta\": \"0:00:00.185103\", \"end\": \"2018-10-02 12:39:55.500572\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:55.315469\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49\nTuesday 02 October 2018 08:39:55 -0400 (0:00:00.620) 0:01:08.301 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59\nTuesday 02 October 2018 08:39:55 -0400 (0:00:00.054) 0:01:08.356 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69\nTuesday 02 October 2018 08:39:55 -0400 (0:00:00.050) 0:01:08.406 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : push ceph files to the ansible server] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2\nTuesday 02 October 2018 08:39:55 -0400 (0:00:00.051) 0:01:08.457 ******* \nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": true, \"checksum\": \"d677a326bd647888546790f10e2cedd45b16b16c\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": 
{\"exists\": false}}], \"md5sum\": \"646a5e052b42e51b88bae71199ef2c70\", \"remote_checksum\": \"d677a326bd647888546790f10e2cedd45b16b16c\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": true, \"checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"ccd6e55e13b5360a1ecae7b8e03bf9a5\", \"remote_checksum\": 
\"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"096130d29629dd16899b5da08c7a169f\", \"remote_checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": 
{\"exists\": false}}], \"md5sum\": \"97fe4cceddcdc2d86e0280a1ab8e043f\", \"remote_checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"3d4af3a8907c988c7836372c7316a585\", \"remote_checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"d4da3e0de49fbf15ede7c6d2d32e75d0\", \"remote_checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84\nTuesday 02 October 2018 08:39:57 -0400 (0:00:01.383) 0:01:09.841 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97\nTuesday 02 October 2018 08:39:57 -0400 (0:00:00.050) 0:01:09.892 ******* \nok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.380099\", \"end\": \"2018-10-02 12:39:57.938971\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-10-02 12:39:57.558872\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-mon : stat for ceph mgr key(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109\nTuesday 02 October 2018 08:39:57 -0400 (0:00:00.849) 0:01:10.741 ******* \nok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1538483997.808753, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, 
\"charset\": \"us-ascii\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483997.9187531, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 73662102, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1538483997.9187531, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744071792120930\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\n\nTASK [ceph-mon : fetch ceph mgr key(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121\nTuesday 02 October 2018 08:39:58 -0400 (0:00:00.404) 0:01:11.146 ******* \nchanged: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483997.9187531, u'block_size': 4096, u'inode': 73662102, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'18446744071792120930', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1538483997.808753, u'mimetype': u'text/plain', u'ctime': 1538483997.9187531, u'isblk': False, u'checksum': u'8bb7be95a8da65439da12aedf5f2fdd1235025df', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', 
u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {\"changed\": true, \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1538483997.808753, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483997.9187531, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 73662102, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1538483997.9187531, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744071792120930\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": 
false, \"xusr\": false}}, \"md5sum\": \"91380060d243fe3cf688ad21a60a8ace\", \"remote_checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : configure crush hierarchy] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2\nTuesday 02 October 2018 08:39:58 -0400 (0:00:00.426) 0:01:11.572 ******* \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create configured crush rules] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14\nTuesday 02 October 2018 08:39:58 -0400 (0:00:00.059) 0:01:11.632 ******* \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get id for new default crush rule] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21\nTuesday 02 October 2018 08:39:58 -0400 (0:00:00.065) 0:01:11.697 ******* \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, 
\"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.067) 0:01:11.765 ******* \nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.062) 0:01:11.827 ******* \nskipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.082) 0:01:11.910 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add new default crush rule to ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.196) 0:01:12.106 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.059) 0:01:12.166 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.058) 0:01:12.225 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.053) 0:01:12.279 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num 
ceph_conf_overrides.global.osd_pool_default_pg_num] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.061) 0:01:12.340 ******* \nok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}\n\nTASK [ceph-mon : test if calamari-server is installed] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:2\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.085) 0:01:12.425 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : increase calamari logging level when debug is on] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:18\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.046) 0:01:12.471 ******* \nskipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : initialize the calamari server api] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.053) 0:01:12.524 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.016) 0:01:12.541 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nTuesday 02 October 2018 08:39:59 -0400 (0:00:00.073) 0:01:12.614 ******* \nchanged: [controller-0] => {\"changed\": true, \"checksum\": 
\"83f7af8323e264039a95f266faedb4a665c8f4ca\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a72fe8d7f7ff92960aa2e96a1b3fe152\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 1398, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483999.94-68990543604263/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.554) 0:01:13.169 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.095) 0:01:13.265 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.135) 0:01:13.401 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.074) 0:01:13.476 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.066) 0:01:13.542 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.045) 0:01:13.588 ******* \nskipping: [controller-0] => (item=ceph-0) => 
{\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nTuesday 02 October 2018 08:40:00 -0400 (0:00:00.088) 0:01:13.676 ******* \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.091) 0:01:13.767 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.076) 0:01:13.844 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.073) 0:01:13.918 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:13.967 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.058) 0:01:14.026 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.060) 0:01:14.086 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called 
before restart] *******\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.074) 0:01:14.160 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.072) 0:01:14.232 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:14.282 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.062) 0:01:14.344 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.062) 0:01:14.407 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.077) 0:01:14.485 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.070) 0:01:14.556 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:14.605 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.058) 0:01:14.664 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nTuesday 02 October 2018 08:40:01 -0400 (0:00:00.054) 0:01:14.718 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nTuesday 02 October 2018 08:40:02 -0400 (0:00:00.080) 0:01:14.799 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nTuesday 02 October 2018 08:40:02 -0400 (0:00:00.077) 0:01:14.877 ******* \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"73c8d33ad2b3c95d77ee4b411e06cae6\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484002.21-134431239702871/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:02 -0400 (0:00:00.591) 0:01:15.468 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nTuesday 02 October 2018 08:40:02 -0400 (0:00:00.093) 0:01:15.562 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": 
\"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nTuesday 02 October 2018 08:40:02 -0400 (0:00:00.132) 0:01:15.695 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:98\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.112) 0:01:15.808 ******* \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20181002084003Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mgrs] ********************************************************************\n\nTASK [set ceph manager install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:110\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.171) 0:01:15.979 ******* \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20181002084003Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.092) 0:01:16.071 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030030\", \"end\": \"2018-10-02 12:40:03.551389\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 
12:40:03.521359\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"d47994d727c0\", \"stdout_lines\": [\"d47994d727c0\"]}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.282) 0:01:16.354 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.052) 0:01:16.406 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.057) 0:01:16.464 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nTuesday 02 October 2018 08:40:03 -0400 (0:00:00.052) 0:01:16.516 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.025933\", \"end\": \"2018-10-02 12:40:04.113330\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:04.087397\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nTuesday 02 
October 2018 08:40:04 -0400 (0:00:00.401) 0:01:16.917 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.057) 0:01:16.975 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.056) 0:01:17.032 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.242) 0:01:17.274 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.055) 0:01:17.329 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.383 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket 
is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.438 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.053) 0:01:17.491 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.056) 0:01:17.547 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.053) 0:01:17.601 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.655 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nTuesday 02 October 2018 08:40:04 -0400 (0:00:00.052) 0:01:17.707 ******* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:17.761 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:17.814 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.060) 0:01:17.874 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:17.926 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:17.980 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.056) 0:01:18.036 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:18.090 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.057) 0:01:18.147 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.055) 0:01:18.202 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:18.255 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:18.308 ******* \nskipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.051) 0:01:18.360 ******* \nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.245) 0:01:18.606 ******* \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nTuesday 02 October 2018 08:40:05 -0400 (0:00:00.081) 0:01:18.687 ******* \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nTuesday 02 October 2018 08:40:06 -0400 (0:00:00.081) 0:01:18.768 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nTuesday 02 October 2018 08:40:06 -0400 (0:00:00.075) 0:01:18.844 ******* \nok: [controller-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nTuesday 02 October 2018 08:40:06 -0400 (0:00:00.166) 0:01:19.011 ******* \nok: [controller-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.388132\", \"end\": \"2018-10-02 12:40:06.857442\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:06.469310\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 
12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nTuesday 02 October 2018 08:40:06 -0400 (0:00:00.655) 0:01:19.667 ******* \nok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.195) 0:01:19.863 
******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.056) 0:01:19.919 ******* \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 50, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.195) 0:01:20.114 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"-\", \"active_gid\": 0, \"active_name\": \"\", \"available\": false, \"available_modules\": [], \"epoch\": 1, \"modules\": [\"balancer\", \"restful\", \"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.089) 0:01:20.204 ******* \nok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.082) 0:01:20.286 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.079) 0:01:20.366 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103\nTuesday 02 October 2018 08:40:07 -0400 (0:00:00.053) 0:01:20.420 ******* \nchanged: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.689161\", \"end\": \"2018-10-02 08:40:08.515819\", \"rc\": 0, \"start\": \"2018-10-02 08:40:07.826658\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"stdout_lines\": [\"4398e5b0-c63c-11e8-b95a-525400c8bd81\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.901) 0:01:21.322 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.053) 0:01:21.376 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.051) 0:01:21.427 ******* \nok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name 
ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.086) 0:01:21.514 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.049) 0:01:21.564 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.053) 0:01:21.618 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.054) 0:01:21.672 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163\nTuesday 02 October 2018 08:40:08 -0400 (0:00:00.055) 0:01:21.727 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.059) 0:01:21.787 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.056) 0:01:21.844 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.052) 0:01:21.897 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.051) 0:01:21.949 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.049) 0:01:21.998 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.051) 0:01:22.050 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.061) 0:01:22.112 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": 
false}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.204) 0:01:22.316 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.052) 0:01:22.369 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nTuesday 02 October 2018 08:40:09 -0400 (0:00:00.181) 0:01:22.551 ******* \nok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": 
\"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", 
\"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nTuesday 02 October 2018 08:40:12 -0400 (0:00:02.206) 0:01:24.758 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.052) 0:01:24.811 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or 
radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.064) 0:01:24.875 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.051) 0:01:24.927 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.049) 0:01:24.976 ******* \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.424) 0:01:25.401 ******* \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.081) 0:01:25.483 ******* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nTuesday 02 October 2018 08:40:12 -0400 (0:00:00.047) 0:01:25.530 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.022088\", \"end\": \"2018-10-02 12:40:13.005544\", \"rc\": 0, \"start\": \"2018-10-02 12:40:12.983456\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.274) 0:01:25.805 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.095) 0:01:25.900 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.025656\", \"end\": \"2018-10-02 12:40:13.374926\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:13.349270\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"d47994d727c0\", \"stdout_lines\": [\"d47994d727c0\"]}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.277) 0:01:26.177 ******* \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.068) 0:01:26.246 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.068) 0:01:26.315 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.084) 0:01:26.399 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.064) 0:01:26.464 ******* \nskipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": 
\"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.134) 0:01:26.598 ******* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was 
False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nTuesday 02 October 2018 08:40:13 -0400 (0:00:00.146) 0:01:26.744 ******* \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:26.793 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.046) 0:01:26.839 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:26.892 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.056) 0:01:26.949 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.055) 0:01:27.004 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.053 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.102 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.150 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"d47994d727c0\"], \"delta\": \"0:00:00.024196\", \"end\": \"2018-10-02 12:40:14.626563\", \"rc\": 0, \"start\": \"2018-10-02 12:40:14.602367\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57\\\",\\n \\\"Created\\\": \\\"2018-10-02T12:39:38.443855569Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 45141,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-10-02T12:39:38.624208881Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/resolv.conf\\\",\\n 
\\\"HostnamePath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 3221225472,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": 
null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 6442450944,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f-init/diff:/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff:/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n 
\\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.15\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n 
\\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"88005597e5b8601dd06c206a599504f9e06151150e681e9896950ce1dc0e8570\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"5126de8d808d5c5d8a90d1e72a006d96449de4809ed996069fb1f3b5e4bb5f68\\\",\\n \\\"EndpointID\\\": \\\"fa6cc8203a497c959078fa65db5e9c6f93592bae4497628b9f488f99f597c39a\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57\\\",\", \" \\\"Created\\\": \\\"2018-10-02T12:39:38.443855569Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 45141,\", \" 
\\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-10-02T12:39:38.624208881Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": 
\\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f-init/diff:/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff:/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": 
false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.15\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" 
\\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"88005597e5b8601dd06c206a599504f9e06151150e681e9896950ce1dc0e8570\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": 
\\\"5126de8d808d5c5d8a90d1e72a006d96449de4809ed996069fb1f3b5e4bb5f68\\\",\", \" \\\"EndpointID\\\": \\\"fa6cc8203a497c959078fa65db5e9c6f93592bae4497628b9f488f99f597c39a\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.296) 0:01:27.447 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.056) 0:01:27.503 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:27.556 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.050) 0:01:27.607 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.057) 0:01:27.664 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nTuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:27.717 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.053) 0:01:27.770 ******* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\"], \"delta\": \"0:00:00.026283\", \"end\": \"2018-10-02 12:40:15.239872\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:15.213589\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": 
{},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 
on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base 
image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": 
\\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.284) 0:01:28.055 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.049) 0:01:28.105 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.048) 0:01:28.154 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.047) 0:01:28.202 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.053) 0:01:28.256 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.051) 0:01:28.307 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.049) 0:01:28.357 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": 
\"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.087) 0:01:28.444 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.046) 0:01:28.490 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.048) 0:01:28.539 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.056) 0:01:28.595 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.053) 0:01:28.648 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nTuesday 02 October 2018 08:40:15 -0400 (0:00:00.050) 0:01:28.699 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.050) 0:01:28.750 ******* \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.034272\", \"end\": \"2018-10-02 12:40:16.223576\", \"rc\": 0, \"start\": \"2018-10-02 12:40:16.189304\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.282) 0:01:29.032 ******* \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.024632\", \"end\": \"2018-10-02 12:40:16.594993\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:16.570361\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n 
\\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} 
-e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": 
\\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.376) 0:01:29.409 ******* \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.187) 0:01:29.596 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.057) 0:01:29.654 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nTuesday 02 October 2018 08:40:16 -0400 (0:00:00.050) 0:01:29.705 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.058) 0:01:29.764 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.051) 0:01:29.815 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.052) 0:01:29.867 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.047) 0:01:29.914 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.054) 0:01:29.969 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.052) 0:01:30.021 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.048) 0:01:30.070 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.048) 0:01:30.118 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nTuesday 02 October 2018 08:40:17 -0400 (0:00:00.049) 0:01:30.168 ******* \nok: 
[controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.421396\", \"end\": \"2018-10-02 12:40:18.131668\", \"rc\": 0, \"start\": \"2018-10-02 12:40:17.710272\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.769) 0:01:30.937 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.249) 0:01:31.187 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.052) 0:01:31.239 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.048) 0:01:31.288 ******* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.083) 0:01:31.371 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.051) 0:01:31.423 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nTuesday 02 October 2018 08:40:18 -0400 (0:00:00.058) 0:01:31.482 ******* \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, 
\"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nTuesday 02 October 2018 08:40:19 -0400 (0:00:00.957) 0:01:32.439 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nTuesday 02 October 2018 08:40:19 -0400 (0:00:00.056) 0:01:32.496 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nTuesday 02 October 2018 08:40:19 -0400 (0:00:00.060) 0:01:32.556 ******* \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nTuesday 02 October 2018 08:40:20 -0400 
(0:00:00.217) 0:01:32.773 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nTuesday 02 October 2018 08:40:20 -0400 (0:00:00.057) 0:01:32.830 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nTuesday 02 October 2018 08:40:20 -0400 (0:00:00.055) 0:01:32.886 ******* \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nTuesday 02 October 2018 08:40:20 -0400 (0:00:00.253) 0:01:33.139 ******* \nok: [controller-0] => {\"changed\": false, \"checksum\": \"d7acef6abeb4e7853e1cf2b7e41f2f58868cad4a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a31e326b2b79369b2901aa2d0f318a37\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484020.44-8431852936027/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nTuesday 02 October 2018 08:40:20 -0400 (0:00:00.580) 0:01:33.719 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set_fact 
docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2\nTuesday 02 October 2018 08:40:21 -0400 (0:00:00.052) 0:01:33.772 ******* \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mgr : create mgr directory] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2\nTuesday 02 October 2018 08:40:21 -0400 (0:00:00.124) 0:01:33.896 ******* \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10\nTuesday 02 October 2018 08:40:21 -0400 (0:00:00.254) 0:01:34.150 ******* \nchanged: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"91380060d243fe3cf688ad21a60a8ace\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484021.46-178482665226417/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, 
\"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set mgr key permissions] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24\nTuesday 02 October 2018 08:40:21 -0400 (0:00:00.564) 0:01:34.714 ******* \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}\n\nTASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.250) 0:01:34.965 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : install ceph mgr for debian] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.054) 0:01:35.020 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.052) 0:01:35.072 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.051) 0:01:35.124 ******* \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : start and add that the mgr service to the init sequence] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.050) 0:01:35.174 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2\nTuesday 02 October 2018 08:40:22 -0400 (0:00:00.049) 0:01:35.224 ******* \nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"168504b73edc17939666d0ef559eaab44f0382c8\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"35d5093713655bbf808450ce1bb2b512\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 734, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484022.52-112121441174884/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mgr : systemd start mgr container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13\nTuesday 02 October 2018 08:40:23 -0400 (0:00:00.851) 0:01:36.075 ******* \nchanged: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", 
\"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmgr.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; 
start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127792\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127792\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": 
\"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19\nTuesday 02 October 2018 08:40:23 -0400 (0:00:00.529) 0:01:36.605 ******* \nchanged: [controller-0 -> 192.168.24.10] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.385691\", \"end\": \"2018-10-02 12:40:24.459499\", \"rc\": 0, \"start\": \"2018-10-02 12:40:24.073808\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\", 
\"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\"]}\n\nTASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26\nTuesday 02 October 2018 08:40:24 -0400 (0:00:00.655) 0:01:37.260 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"], \"enabled_modules\": [\"balancer\", \"restful\", \"status\"]}}, \"changed\": false}\n\nTASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32\nTuesday 02 October 2018 08:40:24 -0400 (0:00:00.086) 0:01:37.347 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_disabled_ceph_mgr_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"]}, \"changed\": false}\n\nTASK [ceph-mgr : disable ceph mgr enabled modules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38\nTuesday 02 October 2018 08:40:24 -0400 (0:00:00.119) 0:01:37.467 ******* \nchanged: [controller-0 -> 192.168.24.10] => (item=balancer) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"balancer\"], \"delta\": \"0:00:01.212066\", \"end\": \"2018-10-02 12:40:26.244390\", \"item\": \"balancer\", \"rc\": 0, \"start\": \"2018-10-02 12:40:25.032324\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [controller-0 -> 192.168.24.10] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", 
\"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:00.810150\", \"end\": \"2018-10-02 12:40:27.236623\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-10-02 12:40:26.426473\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add modules to ceph-mgr] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49\nTuesday 02 October 2018 08:40:27 -0400 (0:00:02.604) 0:01:40.072 ******* \nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nTuesday 02 October 2018 08:40:27 -0400 (0:00:00.030) 0:01:40.103 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nTuesday 02 October 2018 08:40:27 -0400 (0:00:00.169) 0:01:40.272 ******* \nok: [controller-0] => {\"changed\": false, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nTuesday 02 October 2018 08:40:28 -0400 (0:00:00.566) 0:01:40.839 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nTuesday 02 October 2018 
08:40:28 -0400 (0:00:00.172) 0:01:41.012 ******* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nTuesday 02 October 2018 08:40:28 -0400 (0:00:00.135) 0:01:41.147 ******* \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph manager install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:129\nTuesday 02 October 2018 08:40:28 -0400 (0:00:00.103) 0:01:41.251 ******* \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20181002084028Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [osds] ********************************************************************\n\nTASK [set ceph osd install 'In Progress'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:141\nTuesday 02 October 2018 08:40:28 -0400 (0:00:00.166) 0:01:41.418 ******* \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20181002084028Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nTuesday 02 October 2018 08:40:28 -0400 (0:00:00.080) 0:01:41.499 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nTuesday 02 
October 2018 08:40:28 -0400 (0:00:00.045) 0:01:41.545 ******* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.028737\", \"end\": \"2018-10-02 12:40:29.013252\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:28.984515\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.263) 0:01:41.808 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:41.857 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.043) 0:01:41.900 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.047) 0:01:41.948 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nTuesday 02 
October 2018 08:40:29 -0400 (0:00:00.048) 0:01:41.996 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.043 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.084 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.039) 0:01:42.123 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.164 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.041) 0:01:42.205 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] 
***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.246 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.043) 0:01:42.290 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.047) 0:01:42.337 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:42.385 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:42.434 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.480 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.051) 0:01:42.532 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.579 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.056) 0:01:42.635 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:42.684 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nTuesday 02 October 2018 08:40:29 -0400 (0:00:00.045) 0:01:42.730 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nTuesday 02 October 
2018 08:40:30 -0400 (0:00:00.044) 0:01:42.774 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.045) 0:01:42.819 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.045) 0:01:42.865 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.053) 0:01:42.918 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.046) 0:01:42.965 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.046) 0:01:43.011 ******* \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.227) 0:01:43.239 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.070) 0:01:43.310 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.084) 0:01:43.394 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.073) 0:01:43.467 ******* \nok: [ceph-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nTuesday 02 October 2018 08:40:30 -0400 (0:00:00.146) 0:01:43.614 ******* \nok: [ceph-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.336806\", \"end\": \"2018-10-02 12:40:31.399288\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:31.062482\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 
12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nTuesday 02 October 2018 08:40:31 -0400 (0:00:00.588) 0:01:44.202 ******* \nok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_status rc 1] 
***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nTuesday 02 October 2018 08:40:31 -0400 (0:00:00.188) 0:01:44.391 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nTuesday 02 October 2018 08:40:31 -0400 (0:00:00.049) 0:01:44.441 ******* \nok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nTuesday 02 October 2018 08:40:31 -0400 (0:00:00.183) 0:01:44.624 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.15:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nTuesday 02 October 2018 08:40:31 -0400 (0:00:00.083) 0:01:44.707 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88\nTuesday 02 October 2018 08:40:32 -0400 (0:00:00.076) 0:01:44.784 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate 
cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.077) 0:01:44.861 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.047) 0:01:44.909 ******* 
ok: [ceph-0 -> localhost] => {"changed": false, "cmd": "echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "rc": 0, "stdout": "skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists", "stdout_lines": ["skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.190) 0:01:45.100 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.043) 0:01:45.144 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.045) 0:01:45.189 ******* 
ok: [ceph-0] => {"ansible_facts": {"mds_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.176) 0:01:45.365 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.040) 0:01:45.406 ******* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.173) 0:01:45.580 ******* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.074) 0:01:45.654 ******* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163
Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.072) 0:01:45.727 ******* 
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.004040", "end": "2018-10-02 12:40:33.304661", "item": "/dev/vdb", "rc": 0, "start": "2018-10-02 12:40:33.300621", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdc"], "delta": "0:00:00.002955", "end": "2018-10-02 12:40:33.485602", "item": "/dev/vdc", "rc": 0, "start": "2018-10-02 12:40:33.482647", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdc", "stdout_lines": ["/dev/vdc"]}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdd"], "delta": "0:00:00.002920", "end": "2018-10-02 12:40:33.658559", "item": "/dev/vdd", "rc": 0, "start": "2018-10-02 12:40:33.655639", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdd", "stdout_lines": ["/dev/vdd"]}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vde"], "delta": "0:00:00.002832", "end": "2018-10-02 12:40:33.822927", "item": "/dev/vde", "rc": 0, "start": "2018-10-02 12:40:33.820095", "stderr": "", "stderr_lines": [], "stdout": "/dev/vde", "stdout_lines": ["/dev/vde"]}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdf"], "delta": "0:00:00.002898", "end": "2018-10-02 12:40:33.985666", "item": "/dev/vdf", "rc": 0, "start": "2018-10-02 12:40:33.982768", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdf", "stdout_lines": ["/dev/vdf"]}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173
Tuesday 02 October 2018 08:40:34 -0400 (0:00:01.055) 0:01:46.782 ******* 
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.304661', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.004040', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-10-02 12:40:33.300621', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdb"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.004040", "end": "2018-10-02 12:40:33.304661", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdb", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdb", "rc": 0, "start": "2018-10-02 12:40:33.300621", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}}
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.485602', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 'item': u'/dev/vdc', u'delta': u'0:00:00.002955', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-10-02 12:40:33.482647', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdb", "/dev/vdc"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdc"], "delta": "0:00:00.002955", "end": "2018-10-02 12:40:33.485602", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdc", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdc", "rc": 0, "start": "2018-10-02 12:40:33.482647", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdc", "stdout_lines": ["/dev/vdc"]}}
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.658559', '_ansible_no_log': False, u'stdout': u'/dev/vdd', u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.002920', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-10-02 12:40:33.655639', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdb", "/dev/vdc", "/dev/vdd"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdd"], "delta": "0:00:00.002920", "end": "2018-10-02 12:40:33.658559", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdd", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdd", "rc": 0, "start": "2018-10-02 12:40:33.655639", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdd", "stdout_lines": ["/dev/vdd"]}}
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.822927', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.002832', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-10-02 12:40:33.820095', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vde"], "delta": "0:00:00.002832", "end": "2018-10-02 12:40:33.822927", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vde", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vde", "rc": 0, "start": "2018-10-02 12:40:33.820095", "stderr": "", "stderr_lines": [], "stdout": "/dev/vde", "stdout_lines": ["/dev/vde"]}}
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.985666', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': u'0:00:00.002898', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-10-02 12:40:33.982768', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdf"], "delta": "0:00:00.002898", "end": "2018-10-02 12:40:33.985666", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdf", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdf", "rc": 0, "start": "2018-10-02 12:40:33.982768", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdf", "stdout_lines": ["/dev/vdf"]}}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.289) 0:01:47.072 ******* 
ok: [ceph-0] => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"]}, "changed": false}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.208) 0:01:47.280 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.330 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.380 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.047) 0:01:47.428 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.477 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact rgw_hostname] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225
Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.183) 0:01:47.661 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Tuesday 02 October 2018 08:40:35 -0400 (0:00:00.137) 0:01:47.798 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Tuesday 02 October 2018 08:40:35 -0400 (0:00:00.069) 0:01:47.868 ******* 
changed: [ceph-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Tuesday 02 October 2018 08:40:37 -0400 (0:00:02.038) 0:01:49.906 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.047) 0:01:49.954 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.046) 0:01:50.000 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.045) 0:01:50.046 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.046) 0:01:50.092 ******* 
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.376) 0:01:50.468 ******* 
ok: [ceph-0] => {"ansible_facts": {"monitor_name": "ceph-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.077) 0:01:50.546 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.042) 0:01:50.588 ******* 
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.021809", "end": "2018-10-02 12:40:38.017706", "rc": 0, "start": "2018-10-02 12:40:37.995897", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 8633870/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 8633870/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.222) 0:01:50.810 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.078) 0:01:50.888 ******* 
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-ceph-0"], "delta": "0:00:00.023339", "end": "2018-10-02 12:40:38.346811", "failed_when_result": false, "rc": 0, "start": "2018-10-02 12:40:38.323472", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.256) 0:01:51.145 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.097) 0:01:51.243 ******* 
ok: [ceph-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.157) 0:01:51.400 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.096) 0:01:51.497 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.104) 0:01:51.601 ******* 
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1538483996.1513722, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "d677a326bd647888546790f10e2cedd45b16b16c", "ctime": 1538483996.1513722, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 59382517, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.1513722, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"atime": 1538483996.3323712, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "55ce938694f0ed88cb9c4903bdb60b986ace7379", "ctime": 1538483996.3323712, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 59382519, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.3323712, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 688, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"atime": 1538483996.5133705, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "f28d2d0af61547531ab0fa31ff23aca020f498eb", "ctime": 1538483996.5133705, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 77181311, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.5133705, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"atime": 1538483996.6923697, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "4ad6235f1694fb6b72596dffe07b7a3347c382b4", "ctime": 1538483996.6923697, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 80259623, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.6923697, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"atime": 1538483996.870369, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "4d16e08847d6079bcd8caa2adf07e9012cb0f41e", "ctime": 1538483996.870369, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 84314226, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.870369, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"atime": 1538483997.047368, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "5255ad2e079bcf92a5703629e8cbeb93fa79b47a", "ctime": 1538483997.047368, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 89100520, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483997.047368, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"atime": 1538484021.4992602, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "8bb7be95a8da65439da12aedf5f2fdd1235025df", "ctime": 1538483998.7743604, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 59382521, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483998.7743604, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Tuesday 02 October 2018 08:40:40 -0400 (0:00:01.380) 0:01:52.981 ******* 
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.1513722, u'block_size': 4096, u'inode': 59382517, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1538483996.1513722, u'mimetype': u'unknown', u'ctime': 1538483996.1513722, u'isblk': False, u'checksum': u'd677a326bd647888546790f10e2cedd45b16b16c', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.client.admin.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1538483996.1513722, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "d677a326bd647888546790f10e2cedd45b16b16c", "ctime": 1538483996.1513722, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 42430, "gr_name": "mistral", "inode": 59382517, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1538483996.1513722, "nlink": 1, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 42430, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/monmap-ceph", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.3323712, u'block_size': 4096, u'inode': 59382519, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1538483996.3323712, u'mimetype': u'unknown', u'ctime': 1538483996.3323712, u'isblk': False, u'checksum': u'55ce938694f0ed88cb9c4903bdb60b986ace7379', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, 
u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1538483996.3323712, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"ctime\": 1538483996.3323712, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382519, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": 
\"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.3323712, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.5133705, u'block_size': 4096, u'inode': 77181311, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1538483996.5133705, u'mimetype': u'unknown', u'ctime': 1538483996.5133705, u'isblk': False, u'checksum': u'f28d2d0af61547531ab0fa31ff23aca020f498eb', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1538483996.5133705, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"ctime\": 1538483996.5133705, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 77181311, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.5133705, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, 
\"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.6923697, u'block_size': 4096, u'inode': 80259623, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1538483996.6923697, u'mimetype': u'unknown', u'ctime': 1538483996.6923697, u'isblk': False, u'checksum': u'4ad6235f1694fb6b72596dffe07b7a3347c382b4', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1538483996.6923697, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"ctime\": 1538483996.6923697, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 80259623, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.6923697, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => 
(item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.870369, u'block_size': 4096, u'inode': 84314226, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1538483996.870369, u'mimetype': u'unknown', u'ctime': 1538483996.870369, u'isblk': False, u'checksum': u'4d16e08847d6079bcd8caa2adf07e9012cb0f41e', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1538483996.870369, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"ctime\": 1538483996.870369, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 84314226, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.870369, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483997.047368, u'block_size': 4096, u'inode': 89100520, u'isgid': False, 
u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1538483997.047368, u'mimetype': u'unknown', u'ctime': 1538483997.047368, u'isblk': False, u'checksum': u'5255ad2e079bcf92a5703629e8cbeb93fa79b47a', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, 
\"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1538483997.047368, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"ctime\": 1538483997.047368, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 89100520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483997.047368, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483998.7743604, u'block_size': 4096, u'inode': 59382521, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, 
u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1538484021.4992602, u'mimetype': u'unknown', u'ctime': 1538483998.7743604, u'isblk': False, u'checksum': u'8bb7be95a8da65439da12aedf5f2fdd1235025df', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": 
{\"atime\": 1538484021.4992602, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483998.7743604, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382521, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483998.7743604, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.342) 0:01:53.323 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.365 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nTuesday 02 October 2018 08:40:40 -0400 
(0:00:00.043) 0:01:53.409 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.044) 0:01:53.453 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.045) 0:01:53.499 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.045) 0:01:53.544 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.043) 0:01:53.588 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.050) 0:01:53.639 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.681 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nTuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.723 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nTuesday 02 October 2018 08:40:41 -0400 (0:00:00.048) 0:01:53.772 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nTuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.814 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nTuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.857 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nTuesday 02 October 2018 08:40:41 -0400 (0:00:00.050) 0:01:53.908 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs 
container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:53.951 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.994 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.041 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.085 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:54.127 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.047) 0:01:54.175 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.219 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.263 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.306 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.349 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.396 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.047) 0:01:54.443 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.488 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:54.531 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.576 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179
Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.622 ******* 
ok: [ceph-0] => {"attempts": 1, "changed": false, "cmd": ["timeout", "300s", "docker", "pull", "192.168.24.1:8787/rhceph:3-12"], "delta": "0:00:13.105539", "end": "2018-10-02 12:40:55.168399", "rc": 0, "start": "2018-10-02 12:40:42.062860", "stderr": "", "stderr_lines": [], "stdout": "Trying to pull repository 192.168.24.1:8787/rhceph ... \n3-12: Pulling from 192.168.24.1:8787/rhceph\n428a9ca37f0e: Pulling fs layer\n8115a58d83bd: Pulling fs layer\n5e409f26eefe: Pulling fs layer\n8115a58d83bd: Verifying Checksum\n8115a58d83bd: Download complete\n428a9ca37f0e: Verifying Checksum\n428a9ca37f0e: Download complete\n5e409f26eefe: Verifying Checksum\n5e409f26eefe: Download complete\n428a9ca37f0e: Pull complete\n8115a58d83bd: Pull complete\n5e409f26eefe: Pull complete\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12", "stdout_lines": ["Trying to pull repository 192.168.24.1:8787/rhceph ... ", "3-12: Pulling from 192.168.24.1:8787/rhceph", "428a9ca37f0e: Pulling fs layer", "8115a58d83bd: Pulling fs layer", "5e409f26eefe: Pulling fs layer", "8115a58d83bd: Verifying Checksum", "8115a58d83bd: Download complete", "428a9ca37f0e: Verifying Checksum", "428a9ca37f0e: Download complete", "5e409f26eefe: Verifying Checksum", "5e409f26eefe: Download complete", "428a9ca37f0e: Pull complete", "8115a58d83bd: Pull complete", "5e409f26eefe: Pull complete", "Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c", "Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12"]}

TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189
Tuesday 02 October 2018 08:40:55 -0400 (0:00:13.347) 0:02:07.970 ******* 
changed: [ceph-0] => {"changed": true, "cmd": ["docker", "inspect", "192.168.24.1:8787/rhceph:3-12"], "delta": "0:00:00.025743", "end": "2018-10-02 12:40:55.425542", "failed_when_result": false, "rc": 0, "start": "2018-10-02 12:40:55.399799", "stderr": "", "stderr_lines": [], "stdout": "[\n {\n \"Id\": \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\",\n \"RepoTags\": [\n \"192.168.24.1:8787/rhceph:3-12\"\n ],\n \"RepoDigests\": [\n \"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"\n ],\n \"Parent\": \"\",\n \"Comment\": \"\",\n \"Created\": \"2018-08-06T22:30:33.81313Z\",\n \"Container\": \"\",\n \"ContainerConfig\": {\n \"Hostname\": \"2aee9f5752ab\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"ExposedPorts\": {\n \"5000/tcp\": {},\n \"6789/tcp\": {},\n \"6800/tcp\": {},\n \"6801/tcp\": {},\n \"6802/tcp\": {},\n \"6803/tcp\": {},\n \"6804/tcp\": {},\n \"6805/tcp\": {},\n \"80/tcp\": {}\n },\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"container=oci\",\n \"CEPH_VERSION=luminous\",\n \"CEPH_POINT_RELEASE=\"\n ],\n \"Cmd\": [\n \"/bin/sh\",\n \"-c\",\n \"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\"\n ],\n \"ArgsEscaped\": true,\n \"Image\": \"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\",\n \"Volumes\": null,\n \"WorkingDir\": \"/\",\n \"Entrypoint\": [\n \"/entrypoint.sh\"\n ],\n \"OnBuild\": [],\n \"Labels\": {\n \"CEPH_POINT_RELEASE\": \"\",\n \"GIT_BRANCH\": \"stable-3.0\",\n \"GIT_CLEAN\": \"True\",\n \"GIT_COMMIT\": \"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\",\n \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",\n \"RELEASE\": \"stable-3.0\",\n \"architecture\": \"x86_64\",\n \"authoritative-source-url\": \"registry.access.redhat.com\",\n \"build-date\": \"2018-08-06T22:27:39.213799\",\n \"com.redhat.build-host\": \"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\",\n \"com.redhat.component\": \"rhceph-rhel7-container\",\n \"description\": \"Red Hat Ceph Storage 3\",\n \"distribution-scope\": \"public\",\n \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",\n \"io.k8s.description\": \"Red Hat Ceph Storage 3\",\n \"io.k8s.display-name\": \"Red Hat Ceph Storage 3 on RHEL 7\",\n \"io.openshift.expose-services\": \"\",\n \"io.openshift.tags\": \"rhceph ceph\",\n \"maintainer\": \"Erwan Velu <evelu@redhat.com>\",\n \"name\": \"rhceph\",\n \"release\": \"12\",\n \"run\": \"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\",\n \"summary\": \"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\",\n \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\",\n \"usage\": \"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\",\n \"vcs-ref\": \"ef3644dca4abfb12c35763dd708194bad06c2dc3\",\n \"vcs-type\": \"git\",\n \"vendor\": \"Red Hat, Inc.\",\n \"version\": \"3\"\n }\n },\n \"DockerVersion\": \"1.12.6\",\n \"Author\": \"\",\n \"Config\": {\n \"Hostname\": \"2aee9f5752ab\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"ExposedPorts\": {\n \"5000/tcp\": {},\n \"6789/tcp\": {},\n \"6800/tcp\": {},\n \"6801/tcp\": {},\n \"6802/tcp\": {},\n \"6803/tcp\": {},\n \"6804/tcp\": {},\n \"6805/tcp\": {},\n \"80/tcp\": {}\n },\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"container=oci\",\n \"CEPH_VERSION=luminous\",\n \"CEPH_POINT_RELEASE=\"\n ],\n \"Cmd\": null,\n \"ArgsEscaped\": true,\n \"Image\": \"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\",\n \"Volumes\": null,\n \"WorkingDir\": \"/\",\n \"Entrypoint\": [\n \"/entrypoint.sh\"\n ],\n \"OnBuild\": [],\n \"Labels\": {\n \"CEPH_POINT_RELEASE\": \"\",\n \"GIT_BRANCH\": \"stable-3.0\",\n \"GIT_CLEAN\": \"True\",\n \"GIT_COMMIT\": \"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\",\n \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",\n \"RELEASE\": \"stable-3.0\",\n \"architecture\": \"x86_64\",\n \"authoritative-source-url\": \"registry.access.redhat.com\",\n \"build-date\": \"2018-08-06T22:27:39.213799\",\n \"com.redhat.build-host\": \"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\",\n \"com.redhat.component\": \"rhceph-rhel7-container\",\n \"description\": \"Red Hat Ceph Storage 3\",\n \"distribution-scope\": \"public\",\n \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",\n \"io.k8s.description\": \"Red Hat Ceph Storage 3\",\n \"io.k8s.display-name\": \"Red Hat Ceph Storage 3 on RHEL 7\",\n \"io.openshift.expose-services\": \"\",\n \"io.openshift.tags\": \"rhceph ceph\",\n \"maintainer\": \"Erwan Velu <evelu@redhat.com>\",\n \"name\": \"rhceph\",\n \"release\": \"12\",\n \"run\": \"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\",\n \"summary\": \"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\",\n \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\",\n \"usage\": \"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\",\n \"vcs-ref\": \"ef3644dca4abfb12c35763dd708194bad06c2dc3\",\n \"vcs-type\": \"git\",\n \"vendor\": \"Red Hat, Inc.\",\n \"version\": \"3\"\n }\n },\n \"Architecture\": \"amd64\",\n \"Os\": \"linux\",\n \"Size\": 592066185,\n \"VirtualSize\": 592066185,\n \"GraphDriver\": {\n \"Name\": \"overlay2\",\n \"Data\": {\n \"LowerDir\": \"/var/lib/docker/overlay2/1487bf057dc6ee0e44030b9fda5febe23f8daf3d246e0762b1ec85ae495261ed/diff:/var/lib/docker/overlay2/172b14eff060835530b211895b7380ac50933aecf7a81f4d0bfe61b55da6fd8a/diff\",\n \"MergedDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/merged\",\n \"UpperDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/diff\",\n \"WorkDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/work\"\n }\n },\n \"RootFS\": {\n \"Type\": \"layers\",\n \"Layers\": [\n \"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\",\n \"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\",\n \"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\"\n ]\n }\n }\n]", "stdout_lines": ["[", " {", " \"Id\": \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\",", " \"RepoTags\": [", " \"192.168.24.1:8787/rhceph:3-12\"", " ],", " \"RepoDigests\": [", " \"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"", " ],", " \"Parent\": \"\",", " \"Comment\": \"\",", " \"Created\": \"2018-08-06T22:30:33.81313Z\",", " \"Container\": \"\",", " \"ContainerConfig\": {", " \"Hostname\": \"2aee9f5752ab\",", " \"Domainname\": \"\",", " \"User\": \"\",", " \"AttachStdin\": false,", " \"AttachStdout\": false,", " \"AttachStderr\": false,", " \"ExposedPorts\": {", " \"5000/tcp\": {},", " \"6789/tcp\": {},", " \"6800/tcp\": {},", " \"6801/tcp\": {},", " \"6802/tcp\": {},", " \"6803/tcp\": {},", " \"6804/tcp\": {},", " \"6805/tcp\": {},", " \"80/tcp\": {}", " },", " \"Tty\": false,", " \"OpenStdin\": false,", " \"StdinOnce\": false,", " \"Env\": [", " \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",", " \"container=oci\",", " \"CEPH_VERSION=luminous\",", " \"CEPH_POINT_RELEASE=\"", " ],", " \"Cmd\": [", " \"/bin/sh\",", " \"-c\",", " \"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\"", " ],", " \"ArgsEscaped\": true,", " \"Image\": \"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\",", " \"Volumes\": null,", " \"WorkingDir\": \"/\",", " \"Entrypoint\": [", " \"/entrypoint.sh\"", " ],", " \"OnBuild\": [],", " \"Labels\": {", " \"CEPH_POINT_RELEASE\": \"\",", " \"GIT_BRANCH\": \"stable-3.0\",", " \"GIT_CLEAN\": \"True\",", " \"GIT_COMMIT\": \"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\",", " \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",", " \"RELEASE\": \"stable-3.0\",", " \"architecture\": \"x86_64\",", " \"authoritative-source-url\": \"registry.access.redhat.com\",", " \"build-date\": \"2018-08-06T22:27:39.213799\",", " \"com.redhat.build-host\": \"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\",", " \"com.redhat.component\": \"rhceph-rhel7-container\",", " \"description\": \"Red Hat Ceph Storage 3\",", " \"distribution-scope\": \"public\",", " \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",", " \"io.k8s.description\": \"Red Hat Ceph Storage 3\",", " \"io.k8s.display-name\": \"Red Hat Ceph Storage 3 on RHEL 7\",", " \"io.openshift.expose-services\": \"\",", " \"io.openshift.tags\": \"rhceph ceph\",", " \"maintainer\": \"Erwan Velu <evelu@redhat.com>\",", " \"name\": \"rhceph\",", " \"release\": \"12\",", " \"run\": \"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\",", " \"summary\": \"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\",", " \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\",", " \"usage\": \"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\",", " \"vcs-ref\": \"ef3644dca4abfb12c35763dd708194bad06c2dc3\",", " \"vcs-type\": \"git\",", " \"vendor\": \"Red Hat, Inc.\",", " \"version\": \"3\"", " }", " },", " \"DockerVersion\": \"1.12.6\",", " \"Author\": \"\",", " \"Config\": {", " \"Hostname\": \"2aee9f5752ab\",", " \"Domainname\": \"\",", " \"User\": \"\",", " \"AttachStdin\": false,", " \"AttachStdout\": false,", " \"AttachStderr\": false,", " \"ExposedPorts\": {", " \"5000/tcp\": {},", " \"6789/tcp\": {},", " \"6800/tcp\": {},", " \"6801/tcp\": {},", " \"6802/tcp\": {},", " \"6803/tcp\": {},", " \"6804/tcp\": {},", " \"6805/tcp\": {},", " \"80/tcp\": {}", " },", " \"Tty\": false,", " \"OpenStdin\": false,", " \"StdinOnce\": false,", " \"Env\": [", " \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",", " \"container=oci\",", " \"CEPH_VERSION=luminous\",", " \"CEPH_POINT_RELEASE=\"", " ],", " \"Cmd\": null,", " \"ArgsEscaped\": true,", " \"Image\": \"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\",", " \"Volumes\": null,", " \"WorkingDir\": \"/\",", " \"Entrypoint\": [", " \"/entrypoint.sh\"", " ],", " \"OnBuild\": [],", " \"Labels\": {", " \"CEPH_POINT_RELEASE\": \"\",", " \"GIT_BRANCH\": \"stable-3.0\",", " \"GIT_CLEAN\": \"True\",", " \"GIT_COMMIT\": \"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\",", " \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",", " \"RELEASE\": \"stable-3.0\",", " \"architecture\": \"x86_64\",", " \"authoritative-source-url\": \"registry.access.redhat.com\",", " \"build-date\": \"2018-08-06T22:27:39.213799\",", " \"com.redhat.build-host\": \"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\",", " \"com.redhat.component\": \"rhceph-rhel7-container\",", " \"description\": \"Red Hat Ceph Storage 3\",", " \"distribution-scope\": \"public\",", " \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",", " \"io.k8s.description\": \"Red Hat Ceph Storage 3\",", " \"io.k8s.display-name\": \"Red Hat Ceph Storage 3 on RHEL 7\",", " \"io.openshift.expose-services\": \"\",", " \"io.openshift.tags\": \"rhceph ceph\",", " \"maintainer\": \"Erwan Velu <evelu@redhat.com>\",", " \"name\": \"rhceph\",", " \"release\": \"12\",", " \"run\": \"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\",", " \"summary\": \"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\",", " \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\",", " \"usage\": \"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\",", " \"vcs-ref\": \"ef3644dca4abfb12c35763dd708194bad06c2dc3\",", " \"vcs-type\": \"git\",", " \"vendor\": \"Red Hat, Inc.\",", " \"version\": \"3\"", " }", " },", " \"Architecture\": \"amd64\",", " \"Os\": \"linux\",", " \"Size\": 592066185,", " \"VirtualSize\": 592066185,", " \"GraphDriver\": {", " \"Name\": \"overlay2\",", " \"Data\": {", " \"LowerDir\": \"/var/lib/docker/overlay2/1487bf057dc6ee0e44030b9fda5febe23f8daf3d246e0762b1ec85ae495261ed/diff:/var/lib/docker/overlay2/172b14eff060835530b211895b7380ac50933aecf7a81f4d0bfe61b55da6fd8a/diff\",", " \"MergedDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/merged\",", " \"UpperDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/diff\",", " \"WorkDir\": \"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/work\"", " }", " },", " \"RootFS\": {", " \"Type\": \"layers\",", " \"Layers\": [", " \"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\",", " \"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\",", " \"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\"", " ]", " }", " }", "]"]}

TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.261) 0:02:08.231 ******* 
ok: [ceph-0] => {"ansible_facts": {"image_repodigest_after_pulling": "sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.078) 0:02:08.310 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.356 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.047) 0:02:08.404 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.044) 0:02:08.448 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.494 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.539 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.585 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : export local ceph dev image] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.051) 0:02:08.636 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : copy ceph dev image file] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.043) 0:02:08.680 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : load ceph dev image] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292
Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.042) 0:02:08.723 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove tmp ceph dev image file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297
Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.044) 0:02:08.767 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get ceph version] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84
Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.042) 0:02:08.810 ******* 
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "run", "--rm", "--entrypoint", "/usr/bin/ceph", "192.168.24.1:8787/rhceph:3-12", "--version"], "delta": "0:00:00.476108", "end": "2018-10-02 12:40:56.715032", "rc": 0, "start": "2018-10-02 12:40:56.238924", "stderr": "", "stderr_lines": [], "stdout": "ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)", "stdout_lines": ["ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)"]}

TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90
Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.706) 0:02:09.516 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_version": "12.2.4-42.el7cp"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release jewel] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2
Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.196) 0:02:09.712 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8
Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.046) 0:02:09.759 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release luminous] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14
Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.045) 0:02:09.804 ******* 
ok: [ceph-0] => {"ansible_facts": {"ceph_release": "luminous"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release mimic] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20
Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.178) 0:02:09.982 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26
Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.045) 0:02:10.028 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : create bootstrap directories] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2
Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.054) 0:02:10.082 ******* 
changed: [ceph-0] => (item=/etc/ceph) => {"changed": true, "gid": 64045, "group": "64045", "item": "/etc/ceph", "mode": "0755", "owner": "64045", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}

TASK [ceph-config : create ceph conf directory] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.994) 0:02:11.077 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.049) 0:02:11.127 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : create a local fetch directory if it does not exist] *******
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.053) 0:02:11.180 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate cluster uuid] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.173) 0:02:11.354 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : read cluster uuid if it already exists] ********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.052) 0:02:11.406 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.046) 0:02:11.453 ******* 
changed: [ceph-0] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.242) 0:02:11.695 ******* 
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0
changed: [ceph-0] => {"changed": true, "checksum": "2f883daf3398fbd093f10bbdbf556328ece3203e", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "md5sum": "3cdc9cc79dae4f2e11edf0a447f9356d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1213, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1538484059.01-13497339424245/source", "state": "file", "uid": 0}

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Tuesday 02 October 2018 08:41:01 -0400 (0:00:02.137) 0:02:13.833 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure public_network configured] **************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2
Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.074) 0:02:13.908 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure cluster_network configured] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8
Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.052) 0:02:13.960 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure journal_size configured] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15
Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.054) 0:02:14.015 ******* 
ok: [ceph-0] => {
 "msg": "WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"\n}\n\nTASK [ceph-osd : make sure an osd scenario was chosen] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.094) 0:02:14.110 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure a valid osd scenario was chosen] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.159 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify devices have been provided] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.053) 0:02:14.213 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.066) 0:02:14.279 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify lvm_volumes have been provided] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.328 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.055) 0:02:14.384 
******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the devices variable is a list] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.052) 0:02:14.437 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify dedicated devices have been provided] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.050) 0:02:14.487 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.537 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.051) 0:02:14.588 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include system_tuning.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.048) 0:02:14.637 ******* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0\n\nTASK [ceph-osd : disable osd directory parsing by updatedb] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2\nTuesday 02 October 2018 08:41:01 -0400 (0:00:00.077) 
0:02:14.714 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : disable osd directory path in updatedb.conf] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11\nTuesday 02 October 2018 08:41:02 -0400 (0:00:00.047) 0:02:14.762 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : create tmpfiles.d directory] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22\nTuesday 02 October 2018 08:41:02 -0400 (0:00:00.056) 0:02:14.818 ******* \nok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}\n\nTASK [ceph-osd : disable transparent hugepage] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33\nTuesday 02 October 2018 08:41:02 -0400 (0:00:00.343) 0:02:15.162 ******* \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484062.57-141789700165393/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : get default vm.min_free_kbytes] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45\nTuesday 02 October 2018 08:41:03 -0400 (0:00:00.632) 0:02:15.794 ******* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.004662\", 
\"end\": \"2018-10-02 12:41:03.353659\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:41:03.348997\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}\n\nTASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52\nTuesday 02 October 2018 08:41:03 -0400 (0:00:00.355) 0:02:16.150 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}\n\nTASK [ceph-osd : apply operating system tuning] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56\nTuesday 02 October 2018 08:41:03 -0400 (0:00:00.185) 0:02:16.335 ******* \nchanged: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}\nchanged: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}\nchanged: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}\nchanged: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}\nchanged: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}\n\nTASK [ceph-osd : install dependencies] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10\nTuesday 02 October 2018 08:41:04 -0400 (0:00:01.209) 0:02:17.544 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-osd : include common.yml] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18\nTuesday 02 October 2018 08:41:04 -0400 (0:00:00.139) 0:02:17.684 ******* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0\n\nTASK [ceph-osd : create bootstrap-osd and osd directories] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2\nTuesday 02 October 2018 08:41:05 -0400 (0:00:00.088) 0:02:17.772 ******* \nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-osd : copy ceph key(s) if needed] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15\nTuesday 02 October 2018 08:41:05 -0400 (0:00:00.398) 0:02:18.171 ******* \nchanged: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"096130d29629dd16899b5da08c7a169f\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": 
\"/tmp/ceph_ansible_tmp/ansible-tmp-1538484065.48-200308777357540/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2\nTuesday 02 October 2018 08:41:05 -0400 (0:00:00.538) 0:02:18.710 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.042) 0:02:18.752 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.052) 0:02:18.805 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.049) 0:02:18.855 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38\nTuesday 02 October 2018 08:41:06 -0400 
(0:00:00.046) 0:02:18.901 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.048) 0:02:18.950 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.048) 0:02:18.999 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.043) 0:02:19.042 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.079) 0:02:19.122 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.049) 0:02:19.172 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.055) 0:02:19.227 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.046) 0:02:19.273 ******* \nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-10-02-08-22-43-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-10-02-08-22-43-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fec224dd-43d4-4761-93fb-772f1b28103d', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fec224dd-43d4-4761-93fb-772f1b28103d']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-10-02-08-22-43-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-10-02-08-22-43-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fec224dd-43d4-4761-93fb-772f1b28103d\"]}, \"sectors\": \"20967391\", \"sectorsize\": 512, \"size\": \"10.00 GB\", \"start\": \"4096\", \"uuid\": \"fec224dd-43d4-4761-93fb-772f1b28103d\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"20971520\", \"sectorsize\": \"512\", \"size\": \"10.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {\"changed\": false, \"item\": {\"key\": \"vdc\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {\"changed\": false, \"item\": {\"key\": \"vde\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {\"changed\": false, \"item\": {\"key\": \"vdd\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {\"changed\": false, \"item\": {\"key\": \"vdf\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : resolve dedicated device link(s)] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.099) 0:02:19.372 ******* \n\nTASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.042) 0:02:19.415 ******* \n\nTASK [ceph-osd : set_fact build final dedicated_devices list] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.043) 0:02:19.459 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : read information about the devices] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29\nTuesday 02 October 2018 08:41:06 -0400 (0:00:00.044) 0:02:19.503 ******* \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\nok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, 
\"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\nok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\nok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\nok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\n\nTASK [ceph-osd : check the partition status of the osd disks] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2\nTuesday 02 October 2018 08:41:07 -0400 (0:00:01.161) 0:02:20.664 ******* \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007504\", \"end\": \"2018-10-02 12:41:08.112642\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.105138\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.006561\", \"end\": \"2018-10-02 12:41:08.275692\", \"failed_when_result\": false, \"item\": \"/dev/vdc\", \"msg\": 
\"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.269131\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006871\", \"end\": \"2018-10-02 12:41:08.427145\", \"failed_when_result\": false, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.420274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006604\", \"end\": \"2018-10-02 12:41:08.572767\", \"failed_when_result\": false, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.566163\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.012161\", \"end\": \"2018-10-02 12:41:08.727557\", \"failed_when_result\": false, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.715396\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create gpt disk label] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11\nTuesday 02 October 2018 08:41:08 -0400 (0:00:00.861) 0:02:21.526 ******* \nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-10-02 12:41:08.112642', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': 
u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007504', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.105138', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.012537\", \"end\": \"2018-10-02 12:41:08.971637\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007504\", \"end\": \"2018-10-02 12:41:08.112642\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.105138\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:08.959100\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-10-02 12:41:08.275692', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdc', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': 
u'/dev/vdc', u'delta': u'0:00:00.006561', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.269131', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.007439\", \"end\": \"2018-10-02 12:41:09.141079\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.006561\", \"end\": \"2018-10-02 12:41:08.275692\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.269131\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.133640\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-10-02 12:41:08.427145', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006871', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': 
u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.420274', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.013353\", \"end\": \"2018-10-02 12:41:09.333743\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006871\", \"end\": \"2018-10-02 12:41:08.427145\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.420274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.320390\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-10-02 12:41:08.572767', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.006604', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.566163', 
'_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008080\", \"end\": \"2018-10-02 12:41:09.510180\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006604\", \"end\": \"2018-10-02 12:41:08.572767\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.566163\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.502100\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-10-02 12:41:08.727557', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.012161', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.715396', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", 
\"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.010214\", \"end\": \"2018-10-02 12:41:09.685000\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.012161\", \"end\": \"2018-10-02 12:41:08.727557\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.715396\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.674786\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : include scenarios/collocated.yml] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41\nTuesday 02 October 2018 08:41:09 -0400 (0:00:00.969) 0:02:22.495 ******* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0\n\nTASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5\nTuesday 02 October 2018 08:41:09 -0400 (0:00:00.091) 0:02:22.586 ******* \nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': 
u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.943702\", \"end\": \"2018-10-02 12:41:16.985486\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:10.041784\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: 
CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:10'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fe117fde-832c-4763-a5e3-451d4d10d6a6 /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fe117fde-832c-4763-a5e3-451d4d10d6a6 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:91ed2c1d-609c-486f-a066-6419a5472482 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.pnZHZR with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 
/var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.pnZHZR\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.pnZHZR/journal -> /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.pnZHZR\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: 
/usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:10'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fe117fde-832c-4763-a5e3-451d4d10d6a6 /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fe117fde-832c-4763-a5e3-451d4d10d6a6 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:91ed2c1d-609c-486f-a066-6419a5472482 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: 
Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.pnZHZR with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.pnZHZR\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.pnZHZR/journal -> /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.pnZHZR\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.pnZHZR\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running 
command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:10 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:10 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:10 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:10 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:10 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:10 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:10 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:10 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' 
from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', 
u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.806459\", \"end\": \"2018-10-02 12:41:23.974937\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:17.168478\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:17'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8b8ec385-16bc-490b-b98d-385540b0f964 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8b8ec385-16bc-490b-b98d-385540b0f964 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6eb57385-48f5-4f84-abb9-66bc21d04543 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.gjpag9 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.gjpag9\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjpag9/journal -> /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.gjpag9\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjpag9\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:17'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8b8ec385-16bc-490b-b98d-385540b0f964 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8b8ec385-16bc-490b-b98d-385540b0f964 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6eb57385-48f5-4f84-abb9-66bc21d04543 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.gjpag9 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.gjpag9\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjpag9/journal -> /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.gjpag9\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjpag9\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:17 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:17 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:17 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:17 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:17 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:17 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:17 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:17 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership 
of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 
'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.754927\", \"end\": \"2018-10-02 12:41:30.908051\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:24.153124\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source 
/config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir 
-p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:24'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fef6486a-e7cf-4964-b234-b91f87a44ac9 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fef6486a-e7cf-4964-b234-b91f87a44ac9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1ecc8cb2-d418-4bbb-9eb1-7f16b4f8d236 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.Mg8JM3 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.Mg8JM3\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Mg8JM3/journal -> /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Mg8JM3\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:24'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fef6486a-e7cf-4964-b234-b91f87a44ac9 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fef6486a-e7cf-4964-b234-b91f87a44ac9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1ecc8cb2-d418-4bbb-9eb1-7f16b4f8d236 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.Mg8JM3 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Mg8JM3\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Mg8JM3/journal -> /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Mg8JM3\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:24 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:24 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:24 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:24 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:24 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:24 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:24 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:24 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => 
(item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.615148\", \"end\": \"2018-10-02 12:41:37.692462\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, \"start\": 
\"2018-10-02 12:41:31.077314\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:31'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid ef76da91-06ef-48f2-ac83-44e036954486 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:ef76da91-06ef-48f2-ac83-44e036954486 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling 
partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0293c581-8b59-4892-ba50-68ac61ecb1c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.KWOrR0 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.KWOrR0\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KWOrR0/journal -> /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KWOrR0\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:31'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid ef76da91-06ef-48f2-ac83-44e036954486 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:ef76da91-06ef-48f2-ac83-44e036954486 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0293c581-8b59-4892-ba50-68ac61ecb1c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.KWOrR0 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KWOrR0\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KWOrR0/journal -> /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.KWOrR0\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KWOrR0\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:31 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:31 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:31 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:31 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:31 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:31 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:31 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:31 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", 
\"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:07.002085\", \"end\": \"2018-10-02 12:41:44.891974\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", 
\"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:37.889889\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:38'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3ae85ed2-2af1-464d-87a1-0d5f98798701 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3ae85ed2-2af1-464d-87a1-0d5f98798701 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:84fd63db-59ea-4e51-953d-be7355a12f83 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.G0typD with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.G0typD\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.G0typD/journal -> /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.G0typD\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.G0typD\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:38'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3ae85ed2-2af1-464d-87a1-0d5f98798701 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3ae85ed2-2af1-464d-87a1-0d5f98798701 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:84fd63db-59ea-4e51-953d-be7355a12f83 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.G0typD with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.G0typD\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.G0typD/journal -> /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.G0typD\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.G0typD\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:38 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:38 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:38 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:38 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.9bLn7X0Tn2' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:38 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:38 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:38 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:38 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.9bLn7X0Tn2' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = 
sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}\n\nTASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30\nTuesday 02 October 2018 08:41:44 -0400 (0:00:35.127) 0:02:57.714 ******* \nskipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.069) 0:02:57.784 ******* \nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': 
{u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, 
\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", 
\"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': 
False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/non-collocated.yml] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.100) 0:02:57.885 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/lvm.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.044) 0:02:57.929 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-osd : include activate_osds.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.039) 0:02:57.969 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include start_osds.yml] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.040) 0:02:58.009 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include docker/main.yml] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.042) 0:02:58.051 ******* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0\n\nTASK [ceph-osd : include start_docker_osd.yml] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.081) 0:02:58.133 ******* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0\n\nTASK [ceph-osd : umount ceph disk (if on openstack)] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.061) 0:02:58.194 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : test if the container image has the disk_list function] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13\nTuesday 02 October 2018 08:41:45 -0400 (0:00:00.044) 0:02:58.239 ******* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", 
\"192.168.24.1:8787/rhceph:3-12\", \"disk_list.sh\"], \"delta\": \"0:00:00.363866\", \"end\": \"2018-10-02 12:41:46.173239\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:41:45.809373\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 10557679 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-08-06 22:27:40.000000000 +0000\\nModify: 2018-08-06 22:27:40.000000000 +0000\\nChange: 2018-10-02 12:40:47.417875170 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 10557679 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-08-06 22:27:40.000000000 +0000\", \"Modify: 2018-08-06 22:27:40.000000000 +0000\", \"Change: 2018-10-02 12:40:47.417875170 +0000\", \" Birth: -\"]}\n\nTASK [ceph-osd : generate ceph osd docker run script] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19\nTuesday 02 October 2018 08:41:46 -0400 (0:00:00.739) 0:02:58.978 ******* \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"5542e950125b3dbd25e146575a148538f90dc2a6\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"81913dc490826e0e8f21ed305bd0867e\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484106.28-253602508062744/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30\nTuesday 02 October 2018 08:41:47 -0400 (0:00:00.963) 0:02:59.942 ******* \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": 
\"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484107.23-64473267333508/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : systemd start osd container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41\nTuesday 02 October 2018 08:41:48 -0400 (0:00:00.830) 0:03:00.772 ******* \nchanged: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": 
\"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", 
\"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": 
\"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", 
\"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", 
\"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": 
\"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": 
\"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": 
\"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": 
\"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", 
\"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": 
\"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": 
\"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", 
\"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87\nTuesday 02 October 2018 08:41:51 -0400 (0:00:03.162) 0:03:03.934 ******* \nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, 
\"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95\nTuesday 02 October 2018 08:41:51 -0400 (0:00:00.069) 0:03:04.004 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : wait for all osd to be up] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2\nTuesday 02 October 2018 08:41:51 -0400 (0:00:00.077) 0:03:04.082 ******* \nchanged: [ceph-0 -> 192.168.24.10] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.797558\", \"end\": \"2018-10-02 12:41:52.398515\", \"rc\": 0, \"start\": \"2018-10-02 12:41:51.600957\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : list existing pool(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12\nTuesday 02 October 2018 08:41:52 -0400 (0:00:01.150) 0:03:05.232 ******* \nchanged: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", 
\"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.398746\", \"end\": \"2018-10-02 12:41:53.125999\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:52.727253\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.10] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.323270\", \"end\": \"2018-10-02 12:41:53.665322\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.342052\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.313361\", \"end\": \"2018-10-02 12:41:54.175960\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.862599\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", 
\"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.334943\", \"end\": \"2018-10-02 12:41:54.713587\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.378644\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.322561\", \"end\": \"2018-10-02 12:41:55.241928\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.919367\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : set_fact rule_name before luminous] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21\nTuesday 02 October 2018 08:41:55 -0400 (0:00:02.813) 0:03:08.046 ******* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-osd : set_fact rule_name from luminous] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:28
Tuesday 02 October 2018 08:41:55 -0400 (0:00:00.051) 0:03:08.097 ******* 
ok: [ceph-0] => {"ansible_facts": {"rule_name": "replicated_rule"}, "changed": false}

TASK [ceph-osd : create openstack pool(s)] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:35
Tuesday 02 October 2018 08:41:55 -0400 (0:00:00.135) 0:03:08.233 ******* 
ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'images'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-10-02 12:41:53.125999', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.398746', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'stderr': u"Error ENOENT: unrecognized pool 'images'", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:52.727253', '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "images", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.041837", "end": "2018-10-02 12:41:56.791957", "item": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.10"}, "_ansible_ignore_errors": null, "_ansible_item_label": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "images", "size"], "delta": "0:00:00.398746", "end": "2018-10-02 12:41:53.125999", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, "msg": "non-zero return code", "rc": 2, "start": "2018-10-02 12:41:52.727253", "stderr": "Error ENOENT: unrecognized pool 'images'", "stderr_lines": ["Error ENOENT: unrecognized pool 'images'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-10-02 12:41:55.750120", "stderr": "pool 'images' created", "stderr_lines": ["pool 'images' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'metrics'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-10-02 12:41:53.665322', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.323270', '_ansible_item_label': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'stderr': u"Error ENOENT: unrecognized pool 'metrics'", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:53.342052', '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "metrics", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.061873", "end": "2018-10-02 12:41:58.086015", "item": [{"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.10"}, "_ansible_ignore_errors": null, "_ansible_item_label": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "metrics", "size"], "delta": "0:00:00.323270", "end": "2018-10-02 12:41:53.665322", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, "msg": "non-zero return code", "rc": 2, "start": "2018-10-02 12:41:53.342052", "stderr": "Error ENOENT: unrecognized pool 'metrics'", "stderr_lines": ["Error ENOENT: unrecognized pool 'metrics'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-10-02 12:41:57.024142", "stderr": "pool 'metrics' created", "stderr_lines": ["pool 'metrics' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'backups'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-10-02 12:41:54.175960', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.313361', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'stderr': u"Error ENOENT: unrecognized pool 'backups'", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:53.862599', '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "backups", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.012535", "end": "2018-10-02 12:41:59.314032", "item": [{"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.10"}, "_ansible_ignore_errors": null, "_ansible_item_label": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "backups", "size"], "delta": "0:00:00.313361", "end": "2018-10-02 12:41:54.175960", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, "msg": "non-zero return code", "rc": 2, "start": "2018-10-02 12:41:53.862599", "stderr": "Error ENOENT: unrecognized pool 'backups'", "stderr_lines": ["Error ENOENT: unrecognized pool 'backups'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-10-02 12:41:58.301497", "stderr": "pool 'backups' created", "stderr_lines": ["pool 'backups' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'vms'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-10-02 12:41:54.713587', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.334943', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'stderr': u"Error ENOENT: unrecognized pool 'vms'", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:54.378644', '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "vms", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.006895", "end": "2018-10-02 12:42:00.545270", "item": [{"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.10"}, "_ansible_ignore_errors": null, "_ansible_item_label": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "vms", "size"], "delta": "0:00:00.334943", "end": "2018-10-02 12:41:54.713587", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, "msg": "non-zero return code", "rc": 2, "start": "2018-10-02 12:41:54.378644", "stderr": "Error ENOENT: unrecognized pool 'vms'", "stderr_lines": ["Error ENOENT: unrecognized pool 'vms'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-10-02 12:41:59.538375", "stderr": "pool 'vms' created", "stderr_lines": ["pool 'vms' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'volumes'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-10-02 12:41:55.241928', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.322561', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'stderr': u"Error ENOENT: unrecognized pool 'volumes'", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:54.919367', '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "volumes", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.163136", "end": "2018-10-02 12:42:01.934968", "item": [{"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.10"}, "_ansible_ignore_errors": null, "_ansible_item_label": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "volumes", "size"], "delta": "0:00:00.322561", "end": "2018-10-02 12:41:55.241928", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}, "msg": "non-zero return code", "rc": 2, "start": "2018-10-02 12:41:54.919367", "stderr": "Error ENOENT: unrecognized pool 'volumes'", "stderr_lines": ["Error ENOENT: unrecognized pool 'volumes'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-10-02 12:42:00.771832", "stderr": "pool 'volumes' created", "stderr_lines": ["pool 'volumes' created"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : assign application to pool(s)] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:55
Tuesday 02 October 2018 08:42:02 -0400 (0:00:06.557) 0:03:14.790 ******* 
ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "images", "rbd"], "delta": "0:00:00.647796", "end": "2018-10-02 12:42:02.943465", "item": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, "rc": 0, "start": "2018-10-02 12:42:02.295669", "stderr": "enabled application 'rbd' on pool 'images'", "stderr_lines": ["enabled application 'rbd' on pool 'images'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "metrics", "openstack_gnocchi"], "delta": "0:00:00.820625", "end": "2018-10-02 12:42:03.967637", "item": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, "rc": 0, "start": "2018-10-02 12:42:03.147012", "stderr": "enabled application 'openstack_gnocchi' on pool 'metrics'", "stderr_lines": ["enabled application 'openstack_gnocchi' on pool 'metrics'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "backups", "rbd"], "delta": "0:00:00.763479", "end": "2018-10-02 12:42:04.954489", "item": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, "rc": 0, "start": "2018-10-02 12:42:04.191010", "stderr": "enabled application 'rbd' on pool 'backups'", "stderr_lines": ["enabled application 'rbd' on pool 'backups'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "vms", "rbd"], "delta": "0:00:00.824001", "end": "2018-10-02 12:42:05.997335", "item": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, "rc": 0, "start": "2018-10-02 12:42:05.173334", "stderr": "enabled application 'rbd' on pool 'vms'", "stderr_lines": ["enabled application 'rbd' on pool 'vms'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "volumes", "rbd"], "delta": "0:00:00.747992", "end": "2018-10-02 12:42:06.956609", "item": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}, "rc": 0, "start": "2018-10-02 12:42:06.208617", "stderr": "enabled application 'rbd' on pool 'volumes'", "stderr_lines": ["enabled application 'rbd' on pool 'volumes'"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : create openstack cephx key(s)] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:64
Tuesday 02 October 2018 08:42:07 -0400 (0:00:05.019) 0:03:19.810 ******* 
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "auth", "import", "-i", "/etc/ceph//ceph.client.openstack.keyring"], "delta": "0:00:00.796814", "end": "2018-10-02 12:42:08.282332", "item": {"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, "rc": 0, "start": "2018-10-02 12:42:07.485518", "stderr": "imported keyring", "stderr_lines": ["imported keyring"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "auth", "import", "-i", "/etc/ceph//ceph.client.manila.keyring"], "delta": "0:00:00.851542", "end": "2018-10-02 12:42:09.348227", "item": {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, "rc": 0, "start": "2018-10-02 12:42:08.496685", "stderr": "imported keyring", "stderr_lines": ["imported keyring"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "auth", "import", "-i", "/etc/ceph//ceph.client.radosgw.keyring"], "delta": "0:00:00.838943", "end": "2018-10-02 12:42:10.388446", "item": {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}, "rc": 0, "start": "2018-10-02 12:42:09.549503", "stderr": "imported keyring", "stderr_lines": ["imported keyring"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : fetch openstack cephx key(s)] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:77
Tuesday 02 October 2018 08:42:10 -0400 (0:00:03.414) 0:03:23.224 ******* 
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {"changed": true, "checksum": "64fff1482317a1d8364a6da8e84d29db06535fbc", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.openstack.keyring", "item": {"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, "md5sum": "dd3eb3ded7a35db5efca563964aa5ef4", "remote_checksum": "64fff1482317a1d8364a6da8e84d29db06535fbc", "remote_md5sum": null}
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {"changed": true, "checksum": "5b562922a577010a9622d5ab7f25776e35e06a5e", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.manila.keyring", "item": {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, "md5sum": "f8ebf4d94e396034a17e0a1209fd2c2c", "remote_checksum": "5b562922a577010a9622d5ab7f25776e35e06a5e", "remote_md5sum": null}
changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {"changed": true, "checksum": "17aec2a4c51a0277cc4caf052ea82bb5a542ffb8", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.radosgw.keyring", "item": {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}, "md5sum": "44072b3418cd73c910a4c8ab96e42054", "remote_checksum": "17aec2a4c51a0277cc4caf052ea82bb5a542ffb8", "remote_md5sum": null}

TASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:85
Tuesday 02 October 2018 08:42:11 -0400 (0:00:00.615) 0:03:23.840 ******* 
changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}]) => {"changed": true, "checksum": "64fff1482317a1d8364a6da8e84d29db06535fbc", "dest": "/etc/ceph/ceph.client.openstack.keyring", "gid": 167, "group": "167", "item": ["controller-0", {"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}], "mode": "0600", "owner": "167", "path": "/etc/ceph/ceph.client.openstack.keyring", "secontext": "system_u:object_r:etc_t:s0", "size": 253, "state": "file", "uid": 167}
changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", u'mgr': u'allow *'}}]) => {"changed": true, "checksum": "5b562922a577010a9622d5ab7f25776e35e06a5e", "dest": "/etc/ceph/ceph.client.manila.keyring", "gid": 167, "group": "167", "item": ["controller-0", {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}], "mode": "0600", "owner": "167", "path": "/etc/ceph/ceph.client.manila.keyring", "secontext": "system_u:object_r:etc_t:s0", "size": 268, "state": "file", "uid": 167}
changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}]) => {"changed": true, "checksum": "17aec2a4c51a0277cc4caf052ea82bb5a542ffb8", "dest": "/etc/ceph/ceph.client.radosgw.keyring", "gid": 167, "group": "167", "item": ["controller-0", {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}], "mode": "0600", "owner": "167", "path": "/etc/ceph/ceph.client.radosgw.keyring", "secontext": "system_u:object_r:etc_t:s0", "size": 134, "state": "file", "uid": 167}

RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******
Tuesday 02 October 2018 08:42:12 -0400 (0:00:01.226) 0:03:25.067 ******* 
ok: [ceph-0] => {"ansible_facts": {"_mon_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************
Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.190) 0:03:25.258 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.048) 0:03:25.306 ******* 
skipping: [ceph-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******
Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.087) 0:03:25.393 ******* 
skipping: [ceph-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********
Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.090) 0:03:25.484 ******* 
ok: [ceph-0] => {"ansible_facts": {"_mon_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******
Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.201) 0:03:25.685 ******* 
ok: [ceph-0] => {"ansible_facts": {"_osd_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************
Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.195) 0:03:25.881 ******* 
changed: [ceph-0] => {"changed": true, "checksum": "6631c34a339c45ab1081b01015293e952e36893e", "dest": "/tmp/restart_osd_daemon.sh", "gid": 0, "group": "root", "md5sum": "308c89936c25e77f74e78c1e4905ee1a", "mode": "0750", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 3081, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1538484133.34-274789297423069/source", "state": "file", "uid": 0}

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.713) 0:03:26.594 ******* 
skipping: [ceph-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******
Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.077) 0:03:26.672 ******* 
skipping: [ceph-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.085) 0:03:26.757 ******* 
ok: [ceph-0] => {"ansible_facts": {"_osd_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.176) 0:03:26.934 ******* 
ok: [ceph-0] => {"ansible_facts": {"_mds_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.184) 0:03:27.118 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.046) 0:03:27.165 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.053) 0:03:27.218 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.055) 0:03:27.274 ******* 
ok: [ceph-0] => {"ansible_facts": {"_mds_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.161) 0:03:27.435 ******* 
ok: [ceph-0] => {"ansible_facts": {"_rgw_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.075) 0:03:27.511 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.050) 0:03:27.561 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.059) 0:03:27.621 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.049) 0:03:27.670 ******* 
ok: [ceph-0] => {"ansible_facts": {"_rgw_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***
Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.064) 0:03:27.734 ******* 
ok: [ceph-0] => {"ansible_facts": {"_rbdmirror_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:27.800 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.044) 0:03:27.844 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.053) 0:03:27.898 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.049) 0:03:27.947 ******* 
ok: [ceph-0] => {"ansible_facts": {"_rbdmirror_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:28.012 ******* 
ok: [ceph-0] => {"ansible_facts": {"_mgr_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:28.078 ******* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***
Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.041) 0:03:28.120 ******* 
skipping: [ceph-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result 
was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.078) 0:03:28.198 ******* \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.077) 0:03:28.276 ******* \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph osd install 'Complete'] *****************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:156\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.093) 0:03:28.369 ******* \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20181002084215Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [mdss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rgws] ********************************************************************\nskipping: no hosts matched\n\nPLAY [nfss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rbdmirrors] **************************************************************\nskipping: no hosts matched\n\nPLAY [restapis] ****************************************************************\nskipping: no hosts matched\n\nPLAY [clients] *****************************************************************\n\nTASK [set ceph client install 'In Progress'] ***********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:307\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.154) 0:03:28.524 ******* \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": 
{\"installer_phase_ceph_client\": {\"start\": \"20181002084215Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.083) 0:03:28.608 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.047) 0:03:28.656 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nTuesday 02 October 2018 08:42:15 -0400 (0:00:00.049) 0:03:28.705 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.053) 0:03:28.759 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:28.805 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:28.851 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:28.900 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.049) 0:03:28.949 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:28.995 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.053) 0:03:29.049 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:29.097 ******* \nskipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.145 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:29.194 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.240 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:29.286 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:29.333 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.380 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.426 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.471 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.516 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.044) 0:03:29.561 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.608 ******* \nskipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.653 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.698 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nTuesday 02 October 2018 08:42:16 -0400 (0:00:00.044) 0:03:29.743 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.046) 0:03:29.789 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.046) 0:03:29.836 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.047) 0:03:29.884 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.056) 0:03:29.941 ******* \nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.241) 0:03:30.183 ******* \nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.075) 0:03:30.259 ******* \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.078) 0:03:30.337 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.075) 0:03:30.413 ******* \nok: [compute-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nTuesday 02 October 2018 08:42:17 -0400 (0:00:00.151) 0:03:30.564 ******* \nok: [compute-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.392824\", \"end\": \"2018-10-02 12:42:18.410684\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:18.017860\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 
12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":18,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":565141504,\\\"bytes_avail\\\":55748530176,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":18,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":565141504,\\\"bytes_avail\\\":55748530176,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nTuesday 02 October 2018 08:42:18 -0400 (0:00:00.650) 0:03:31.215 ******* \nok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": 
{\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nTuesday 02 October 2018 08:42:18 -0400 (0:00:00.192) 0:03:31.407 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nTuesday 02 October 2018 08:42:18 -0400 (0:00:00.053) 0:03:31.461 ******* \nok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nTuesday 02 October 2018 08:42:18 -0400 (0:00:00.192) 0:03:31.654 ******* \nok: [compute-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.15:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 18, \"full\": false, \"nearfull\": false, \"num_in_osds\": 5, \"num_osds\": 5, \"num_remapped_pgs\": 0, \"num_up_osds\": 5}}, \"pgmap\": {\"bytes_avail\": 55748530176, \"bytes_total\": 56313671680, \"bytes_used\": 565141504, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 160, \"num_pools\": 5, \"pgs_by_state\": [{\"count\": 160, \"state_name\": \"active+clean\"}]}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nTuesday 02 October 2018 08:42:18 -0400 (0:00:00.085) 0:03:31.740 ******* \nok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.075) 0:03:31.816 ******* \nok: [compute-0] => {\"ansible_facts\": 
{\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.190) 0:03:32.007 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.052) 0:03:32.059 ******* \nok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.204) 0:03:32.263 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.047) 0:03:32.311 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.043) 0:03:32.354 ******* \nok: [compute-0] => {\"ansible_facts\": 
{\"mds_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.204) 0:03:32.559 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nTuesday 02 October 2018 08:42:19 -0400 (0:00:00.046) 0:03:32.605 ******* \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.201) 0:03:32.807 ******* \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.205) 0:03:33.013 ******* \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.192) 0:03:33.206 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.059) 0:03:33.266 ******* \nskipping: [compute-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.181) 0:03:33.447 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.047) 0:03:33.495 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.051) 0:03:33.546 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.049) 0:03:33.596 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.051) 0:03:33.647 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218\nTuesday 02 October 2018 08:42:20 -0400 (0:00:00.052) 0:03:33.700 ******* \nok: 
[compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225\nTuesday 02 October 2018 08:42:21 -0400 (0:00:00.082) 0:03:33.782 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nTuesday 02 October 2018 08:42:21 -0400 (0:00:00.047) 0:03:33.830 ******* \nok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nTuesday 02 October 2018 08:42:21 -0400 (0:00:00.074) 0:03:33.905 ******* \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, 
\"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", 
\"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nTuesday 02 October 2018 08:42:23 -0400 (0:00:02.115) 0:03:36.020 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nTuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.074 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nTuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.127 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nTuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.181 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nTuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.234 ******* \nok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nTuesday 02 October 2018 08:42:23 -0400 (0:00:00.451) 0:03:36.686 ******* \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nTuesday 02 October 
2018 08:42:24 -0400 (0:00:00.083) 0:03:36.769 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.043) 0:03:36.813 ******* \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.025770\", \"end\": \"2018-10-02 12:42:24.227023\", \"rc\": 0, \"start\": \"2018-10-02 12:42:24.201253\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.264) 0:03:37.077 ******* \nok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.081) 0:03:37.159 ******* \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.022877\", \"end\": \"2018-10-02 12:42:24.572834\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:24.549957\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.261) 0:03:37.421 ******* \nskipping: [compute-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.058) 0:03:37.479 ******* \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.066) 0:03:37.546 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.056) 0:03:37.602 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.064) 0:03:37.666 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nTuesday 02 October 2018 08:42:24 -0400 (0:00:00.051) 0:03:37.718 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.049) 0:03:37.768 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.043) 0:03:37.811 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.041) 0:03:37.853 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.044) 0:03:37.897 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.058) 0:03:37.956 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.053) 0:03:38.010 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.060 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.110 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.048) 0:03:38.158 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.048) 0:03:38.207 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.059) 0:03:38.267 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.317 ******* \nskipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.046) 0:03:38.363 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.046) 0:03:38.410 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.055) 0:03:38.465 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.054) 0:03:38.520 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.062) 0:03:38.583 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nTuesday 02 October 
2018 08:42:25 -0400 (0:00:00.064) 0:03:38.648 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nTuesday 02 October 2018 08:42:25 -0400 (0:00:00.055) 0:03:38.704 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.053) 0:03:38.757 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.050) 0:03:38.808 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.050) 0:03:38.859 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.060) 0:03:38.920 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.051) 0:03:38.971 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.051) 0:03:39.022 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.049) 0:03:39.072 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.048) 0:03:39.120 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.050) 0:03:39.171 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.057) 0:03:39.228 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nTuesday 02 October 2018 08:42:26 -0400 (0:00:00.052) 0:03:39.281 ******* \nok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:14.082760\", \"end\": \"2018-10-02 12:42:40.766740\", \"rc\": 0, \"start\": \"2018-10-02 12:42:26.683980\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nTuesday 02 October 2018 08:42:40 -0400 (0:00:14.341) 0:03:53.623 ******* \nchanged: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027387\", \"end\": \"2018-10-02 12:42:41.058482\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:41.031095\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": 
{},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n 
\\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base 
image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/b8d0a98064d555daef74d7b023d00f17de29f7cfd26a4f21a98a3ca39f66136f/diff:/var/lib/docker/overlay2/3dafe6d2bc5c1dbf6269c88efd0920f9a59be9445b59cbf5f08594f915afa247/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": 
\\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/b8d0a98064d555daef74d7b023d00f17de29f7cfd26a4f21a98a3ca39f66136f/diff:/var/lib/docker/overlay2/3dafe6d2bc5c1dbf6269c88efd0920f9a59be9445b59cbf5f08594f915afa247/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.301) 0:03:53.925 ******* \nok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.094) 0:03:54.019 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.051) 0:03:54.070 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.060) 0:03:54.131 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.048) 0:03:54.179 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.050) 0:03:54.230 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.052) 0:03:54.282 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.046) 0:03:54.329 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.052) 0:03:54.381 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.047) 0:03:54.429 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.056) 0:03:54.486 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.051) 0:03:54.537 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nTuesday 02 October 2018 08:42:41 -0400 (0:00:00.054) 0:03:54.591 ******* \nok: [compute-0] => 
{\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.448590\", \"end\": \"2018-10-02 12:42:42.421062\", \"rc\": 0, \"start\": \"2018-10-02 12:42:41.972472\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nTuesday 02 October 2018 08:42:42 -0400 (0:00:00.683) 0:03:55.275 ******* \nok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nTuesday 02 October 2018 08:42:42 -0400 (0:00:00.186) 0:03:55.461 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nTuesday 02 October 2018 08:42:42 -0400 (0:00:00.049) 0:03:55.511 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nTuesday 02 October 2018 08:42:42 -0400 (0:00:00.049) 0:03:55.560 ******* \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nTuesday 02 October 2018 08:42:43 -0400 (0:00:00.195) 0:03:55.756 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nTuesday 02 October 2018 08:42:43 -0400 (0:00:00.049) 0:03:55.806 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nTuesday 02 October 2018 08:42:43 -0400 (0:00:00.056) 0:03:55.862 ******* \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": 
\"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nTuesday 02 October 2018 08:42:44 -0400 (0:00:01.032) 0:03:56.895 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.048) 0:03:56.944 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.054) 0:03:56.998 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.174) 0:03:57.173 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.052) 0:03:57.225 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.046) 0:03:57.272 ******* \nchanged: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nTuesday 02 October 2018 08:42:44 -0400 (0:00:00.235) 0:03:57.507 ******* \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph 
mds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0\nchanged: [compute-0] => {\"changed\": true, \"checksum\": \"55b1f0577e67c2bfbbd30f40df9ea9b389d9639b\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"61444335eb3c3ef3239f2dde50381d2b\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1320, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484164.81-36246248334490/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact 
when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nTuesday 02 October 2018 08:42:46 -0400 (0:00:02.175) 0:03:59.683 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2\nTuesday 02 October 2018 08:42:46 -0400 (0:00:00.054) 0:03:59.738 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.044) 0:03:59.782 ******* \nskipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": 
false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.069) 0:03:59.852 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : create filtered clients group] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:20\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.046) 0:03:59.899 ******* \ncreating host via 'add_host': hostname=compute-0\nchanged: [compute-0] => (item=compute-0) => {\"add_host\": {\"groups\": [\"_filtered_clients\"], \"host_name\": \"compute-0\", \"host_vars\": {}}, \"changed\": true, \"item\": \"compute-0\"}\n\nTASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:28\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.116) 0:04:00.015 ******* \nok: [compute-0] => {\"changed\": false, 
\"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-12\", \"300\"], \"delta\": \"0:00:00.233408\", \"end\": \"2018-10-02 12:42:47.633663\", \"rc\": 0, \"start\": \"2018-10-02 12:42:47.400255\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"28f3fe1dc230246e335c7a5a364dba44e8d407c872d491398c86c6d79a098f3e\", \"stdout_lines\": [\"28f3fe1dc230246e335c7a5a364dba44e8d407c872d491398c86c6d79a098f3e\"]}\n\nTASK [ceph-client : set_fact delegated_node] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:43\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.468) 0:04:00.484 ******* \nok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-client : set_fact condition_copy_admin_key] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:47\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.073) 0:04:00.557 ******* \nok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}\n\nTASK [ceph-client : set_fact docker_exec_cmd] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:51\nTuesday 02 October 2018 08:42:47 -0400 (0:00:00.077) 0:04:00.635 ******* \nok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}\n\nTASK [ceph-client : create cephx key(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:57\nTuesday 02 October 2018 08:42:48 -0400 (0:00:00.137) 0:04:00.772 ******* \nchanged: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd 
pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.910400\", \"end\": \"2018-10-02 12:42:49.145072\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:48.234672\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.manila.keyring\"], \"delta\": \"0:00:00.869185\", \"end\": \"2018-10-02 12:42:50.299889\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:49.430704\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], 
\"stdout\": \"\", \"stdout_lines\": []}\nchanged: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.928657\", \"end\": \"2018-10-02 12:42:51.412385\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:50.483728\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-client : slurp client cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:75\nTuesday 02 October 2018 08:42:51 -0400 (0:00:03.469) 0:04:04.241 ******* \nok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile 
rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}\nok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}\nok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": 
\"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}\n\nTASK [ceph-client : list existing pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:87\nTuesday 02 October 2018 08:42:52 -0400 (0:00:00.606) 0:04:04.848 ******* \n\nTASK [ceph-client : create ceph pool(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:99\nTuesday 02 October 2018 08:42:52 -0400 (0:00:00.055) 0:04:04.903 ******* \n\nTASK [ceph-client : get client cephx keys] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:122\nTuesday 02 October 2018 08:42:52 -0400 (0:00:00.048) 0:04:04.952 ******* \nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {u'mode': u'0600', u'name': u'client.openstack', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, 
'_ansible_item_label': {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}}) => {\"changed\": true, \"checksum\": \"64fff1482317a1d8364a6da8e84d29db06535fbc\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"dd3eb3ded7a35db5efca563964aa5ef4\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484172.41-235840774940833/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': 
u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {u'mode': u'0600', u'name': u'client.manila', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}}) => {\"changed\": true, \"checksum\": \"5b562922a577010a9622d5ab7f25776e35e06a5e\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": 
\"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"f8ebf4d94e396034a17e0a1209fd2c2c\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484172.89-139378585418982/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {u'mode': u'0600', u'name': u'client.radosgw', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}}) => {\"changed\": true, \"checksum\": \"17aec2a4c51a0277cc4caf052ea82bb5a542ffb8\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": 
\"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"44072b3418cd73c910a4c8ab96e42054\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484173.36-204390493911564/source\", \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nTuesday 02 October 2018 08:42:53 -0400 (0:00:01.624) 0:04:06.576 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.191) 0:04:06.767 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.048) 0:04:06.816 ******* \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.084) 0:04:06.900 ******* \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER 
[ceph-defaults : set _mon_handler_called after restart] ********\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.199) 0:04:07.099 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.157) 0:04:07.257 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.074) 0:04:07.332 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.046) 0:04:07.378 ******* \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.085) 0:04:07.464 ******* \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.085) 0:04:07.549 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.076) 0:04:07.626 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nTuesday 02 October 2018 08:42:54 -0400 (0:00:00.077) 0:04:07.704 ******* 
\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.049) 0:04:07.754 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.059) 0:04:07.813 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.056) 0:04:07.869 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.076) 0:04:07.946 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.077) 0:04:08.023 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.048) 0:04:08.071 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.061) 0:04:08.133 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nTuesday 02 October 2018 
08:42:55 -0400 (0:00:00.058) 0:04:08.192 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.073) 0:04:08.266 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.077) 0:04:08.343 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.046) 0:04:08.389 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.054) 0:04:08.444 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.057) 0:04:08.501 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.079) 0:04:08.580 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.076) 0:04:08.657 ******* \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart 
ceph mgr daemon(s) - non container] ***\nTuesday 02 October 2018 08:42:55 -0400 (0:00:00.047) 0:04:08.705 ******* \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nTuesday 02 October 2018 08:42:56 -0400 (0:00:00.086) 0:04:08.791 ******* \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nTuesday 02 October 2018 08:42:56 -0400 (0:00:00.082) 0:04:08.873 ******* \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph client install 'Complete'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:324\nTuesday 02 October 2018 08:42:56 -0400 (0:00:00.105) 0:04:08.979 ******* \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20181002084256Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=88 changed=19 unreachable=0 failed=0 \ncompute-0 : ok=56 changed=8 unreachable=0 failed=0 \ncontroller-0 : ok=121 changed=22 unreachable=0 failed=0 \n\n\nINSTALLER STATUS ***************************************************************\nInstall Ceph Monitor : Complete (0:01:02)\nInstall Ceph Manager : Complete (0:00:25)\nInstall Ceph OSD : Complete (0:01:47)\nInstall Ceph Client : Complete (0:00:41)\n\nTuesday 02 October 2018 08:42:56 -0400 (0:00:00.067) 0:04:09.046 ******* \n=============================================================================== ", "stdout_lines": ["ansible-playbook 2.5.7", " 
config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:25", "Tuesday 02 October 2018 08:38:47 -0400 (0:00:00.215) 0:00:00.215 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] ***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:30", "Tuesday 02 October 2018 08:38:47 -0400 (0:00:00.086) 0:00:00.302 ******* ", "ok: [controller-0 -> 192.168.24.12] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.10] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.8] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:39", "Tuesday 02 October 2018 08:39:00 -0400 (0:00:13.098) 0:00:13.400 ******* ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK 
[set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:46", "Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.433) 0:00:13.833 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:66", "Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.251) 0:00:14.085 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:76", "Tuesday 02 October 2018 08:39:01 -0400 (0:00:00.122) 0:00:14.207 ******* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20181002083901Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Tuesday 02 October 2018 08:39:01 
-0400 (0:00:00.247) 0:00:14.455 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.031513\", \"end\": \"2018-10-02 12:39:02.007429\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:01.975916\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.349) 0:00:14.804 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:14.851 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:14.899 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.049) 0:00:14.948 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.024345\", \"end\": \"2018-10-02 12:39:02.404826\", \"failed_when_result\": false, \"rc\": 0, 
\"start\": \"2018-10-02 12:39:02.380481\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.255) 0:00:15.203 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.051) 0:00:15.255 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.048) 0:00:15.304 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.045) 0:00:15.350 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.396 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph 
osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.053) 0:00:15.450 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.051) 0:00:15.501 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.547 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.047) 0:00:15.595 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.045) 0:00:15.640 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Tuesday 02 
October 2018 08:39:02 -0400 (0:00:00.047) 0:00:15.688 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Tuesday 02 October 2018 08:39:02 -0400 (0:00:00.046) 0:00:15.734 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:15.784 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.048) 0:00:15.832 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:15.882 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:15.934 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.058) 0:00:15.993 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.053) 0:00:16.046 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.098 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.050) 0:00:16.149 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.200 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.051) 0:00:16.251 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.047) 0:00:16.299 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.045) 0:00:16.345 ******* ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.230) 0:00:16.576 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.072) 0:00:16.648 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Tuesday 02 October 2018 08:39:03 -0400 (0:00:00.084) 0:00:16.733 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.087) 0:00:16.820 ******* ", "ok: [controller-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.160) 0:00:16.980 ******* ", "ok: [controller-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.025706\", \"end\": \"2018-10-02 12:39:04.446788\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-10-02 12:39:04.421082\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.275) 0:00:17.255 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.192) 0:00:17.447 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it 
does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Tuesday 02 October 2018 08:39:04 -0400 (0:00:00.053) 0:00:17.501 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.424) 0:00:17.925 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.053) 0:00:17.978 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.048) 0:00:18.027 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.080) 0:00:18.108 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Tuesday 02 October 
2018 08:39:05 -0400 (0:00:00.049) 0:00:18.157 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.048) 0:00:18.205 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.044) 0:00:18.250 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.055) 0:00:18.306 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.181) 0:00:18.487 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.047) 0:00:18.535 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task 
path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Tuesday 02 October 2018 08:39:05 -0400 (0:00:00.180) 0:00:18.716 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.185) 0:00:18.901 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.248) 0:00:19.149 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.048) 0:00:19.198 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.046) 0:00:19.245 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.047) 0:00:19.292 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", 
"", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.337 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.046) 0:00:19.384 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.428 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.048) 0:00:19.477 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.068) 0:00:19.545 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.044) 0:00:19.589 ******* ", "ok: [controller-0] 
=> {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Tuesday 02 October 2018 08:39:06 -0400 (0:00:00.067) 0:00:19.657 ******* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, 
\"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => 
(item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:02.110) 0:00:21.768 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.055) 0:00:21.824 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.062) 0:00:21.886 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Tuesday 02 October 
2018 08:39:09 -0400 (0:00:00.051) 0:00:21.937 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.050) 0:00:21.988 ******* ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.426) 0:00:22.415 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.081) 0:00:22.496 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Tuesday 02 October 2018 08:39:09 -0400 (0:00:00.045) 0:00:22.542 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.024330\", \"end\": 
\"2018-10-02 12:39:09.999720\", \"rc\": 0, \"start\": \"2018-10-02 12:39:09.975390\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.256) 0:00:22.798 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.070) 0:00:22.869 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.024567\", \"end\": \"2018-10-02 12:39:10.319930\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:10.295363\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.252) 0:00:23.121 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.085) 0:00:23.207 ******* ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.137) 0:00:23.344 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.084) 0:00:23.429 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Tuesday 02 October 2018 08:39:10 -0400 (0:00:00.096) 0:00:23.526 ******* ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, 
\"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:01.235) 0:00:24.761 ******* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 
'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, 
u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": 
\"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, 
\"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': 
False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', 
u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.304) 0:00:25.066 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.049) 0:00:25.116 ******* ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.044) 0:00:25.160 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.051) 0:00:25.212 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.052) 0:00:25.264 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.055) 0:00:25.320 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.058) 0:00:25.378 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.049) 0:00:25.428 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.053) 0:00:25.481 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.056) 0:00:25.537 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.048) 0:00:25.586 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.053) 0:00:25.639 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Tuesday 02 October 2018 08:39:12 -0400 (0:00:00.051) 0:00:25.691 ******* ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.059) 0:00:25.750 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:25.802 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:25.853 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.057) 0:00:25.910 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.059) 0:00:25.970 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.021 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:26.072 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.053) 0:00:26.125 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.048) 0:00:26.173 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.224 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.053) 0:00:26.278 ******* ", "skipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.048) 0:00:26.327 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.051) 0:00:26.378 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.050) 0:00:26.429 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.055) 0:00:26.484 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.052) 0:00:26.537 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Tuesday 02 October 2018 08:39:13 -0400 (0:00:00.049) 0:00:26.586 ******* ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.823581\", \"end\": \"2018-10-02 12:39:27.938363\", \"rc\": 0, \"start\": \"2018-10-02 12:39:14.114782\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Tuesday 02 October 2018 08:39:27 -0400 (0:00:14.156) 0:00:40.743 ******* ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027445\", \"end\": \"2018-10-02 12:39:28.211179\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:28.183734\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n 
\\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base 
image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": 
\\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.282) 0:00:41.025 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.084) 0:00:41.109 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.057) 0:00:41.167 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.055) 0:00:41.222 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.274 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.048) 0:00:41.323 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.054) 0:00:41.378 ******* ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.430 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.058) 0:00:41.488 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.541 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.592 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.051) 0:00:41.644 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Tuesday 02 October 2018 08:39:28 -0400 (0:00:00.052) 0:00:41.696 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.460525\", \"end\": \"2018-10-02 12:39:29.600571\", \"rc\": 0, \"start\": \"2018-10-02 12:39:29.140046\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.704) 0:00:42.401 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.082) 0:00:42.483 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.050) 0:00:42.534 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.048) 
0:00:42.582 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.082) 0:00:42.665 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Tuesday 02 October 2018 08:39:29 -0400 (0:00:00.056) 0:00:42.721 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Tuesday 02 October 2018 08:39:30 -0400 (0:00:00.047) 0:00:42.769 ******* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Tuesday 02 October 2018 08:39:30 -0400 (0:00:00.949) 0:00:43.718 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.055) 0:00:43.773 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.051) 0:00:43.824 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.214) 0:00:44.039 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.055) 0:00:44.095 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.048) 0:00:44.143 ******* ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Tuesday 02 October 2018 08:39:31 -0400 (0:00:00.255) 0:00:44.398 ******* ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER 
ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart 
ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"d7acef6abeb4e7853e1cf2b7e41f2f58868cad4a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a31e326b2b79369b2901aa2d0f318a37\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483971.7-281398146065481/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:02.513) 0:00:46.912 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.053) 0:00:46.965 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.079) 0:00:47.044 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Tuesday 02 October 2018 08:39:34 
-0400 (0:00:00.059) 0:00:47.103 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.061) 0:00:47.165 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.218 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.271 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.049) 0:00:47.321 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.050) 0:00:47.371 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] 
***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.051) 0:00:47.423 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.055) 0:00:47.478 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.055) 0:00:47.533 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.054) 0:00:47.588 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.641 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Tuesday 02 October 2018 08:39:34 -0400 (0:00:00.052) 0:00:47.693 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.061) 0:00:47.755 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.052) 0:00:47.807 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.052) 0:00:47.860 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.051) 0:00:47.912 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.051) 0:00:47.963 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.054) 0:00:48.018 ******* ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.058) 0:00:48.077 ******* ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.055) 0:00:48.132 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.081) 0:00:48.214 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.094) 0:00:48.308 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22", "Tuesday 02 October 2018 08:39:35 -0400 (0:00:00.088) 0:00:48.397 ******* ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33", "Tuesday 02 October 2018 08:39:36 -0400 (0:00:00.943) 0:00:49.341 ******* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 
'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": 
false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": 
\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": 
\"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, 
\"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Tuesday 02 October 2018 08:39:36 -0400 (0:00:00.153) 0:00:49.494 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Tuesday 02 October 2018 08:39:36 -0400 (0:00:00.055) 0:00:49.550 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Tuesday 02 October 2018 08:39:36 -0400 (0:00:00.057) 0:00:49.607 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Tuesday 02 October 2018 08:39:36 -0400 (0:00:00.046) 0:00:49.654 ******* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"30dd79ca23c7e5e775a5e6dab299d35ee19c6909\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"e0f5a6276ad9be3c40dea6db9c92e5a5\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 887, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483976.95-259272916863923/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd 
start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Tuesday 02 October 2018 08:39:37 -0400 (0:00:00.872) 0:00:50.526 ******* ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmon.slice docker.service basic.target systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.15 -e CLUSTER=ceph -e 
FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127792\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127792\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": 
\"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Tuesday 02 October 2018 08:39:38 -0400 (0:00:00.702) 0:00:51.229 ******* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483978.52-29330794934663/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Tuesday 02 October 2018 08:39:39 -0400 (0:00:00.552) 0:00:51.781 ******* ", "FAILED - RETRYING: wait for monitor socket to exist (5 retries left).", "changed: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.078298\", \"end\": \"2018-10-02 12:39:54.690204\", \"rc\": 0, \"start\": \"2018-10-02 12:39:54.611906\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 333080 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-10-02 12:39:39.563714788 +0000\\nModify: 2018-10-02 12:39:39.563714788 +0000\\nChange: 2018-10-02 12:39:39.563714788 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 333080 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-10-02 
12:39:39.563714788 +0000\", \"Modify: 2018-10-02 12:39:39.563714788 +0000\", \"Change: 2018-10-02 12:39:39.563714788 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Tuesday 02 October 2018 08:39:54 -0400 (0:00:15.711) 0:01:07.493 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Tuesday 02 October 2018 08:39:54 -0400 (0:00:00.093) 0:01:07.586 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Tuesday 02 October 2018 08:39:54 -0400 (0:00:00.094) 0:01:07.681 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.15\"], \"delta\": \"0:00:00.185103\", \"end\": \"2018-10-02 12:39:55.500572\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:39:55.315469\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Tuesday 02 October 2018 08:39:55 -0400 (0:00:00.620) 
0:01:08.301 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Tuesday 02 October 2018 08:39:55 -0400 (0:00:00.054) 0:01:08.356 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Tuesday 02 October 2018 08:39:55 -0400 (0:00:00.050) 0:01:08.406 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Tuesday 02 October 2018 08:39:55 -0400 (0:00:00.051) 0:01:08.457 ******* ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": true, \"checksum\": 
\"d677a326bd647888546790f10e2cedd45b16b16c\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"646a5e052b42e51b88bae71199ef2c70\", \"remote_checksum\": \"d677a326bd647888546790f10e2cedd45b16b16c\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": true, \"checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", 
\"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"ccd6e55e13b5360a1ecae7b8e03bf9a5\", \"remote_checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"dest\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"096130d29629dd16899b5da08c7a169f\", \"remote_checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": true, \"checksum\": 
\"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"97fe4cceddcdc2d86e0280a1ab8e043f\", \"remote_checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => 
{\"changed\": true, \"checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"3d4af3a8907c988c7836372c7316a585\", \"remote_checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"d4da3e0de49fbf15ede7c6d2d32e75d0\", \"remote_checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84", "Tuesday 02 October 2018 08:39:57 -0400 (0:00:01.383) 0:01:09.841 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97", "Tuesday 02 October 2018 08:39:57 -0400 (0:00:00.050) 0:01:09.892 ******* ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", 
\"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.380099\", \"end\": \"2018-10-02 12:39:57.938971\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-10-02 12:39:57.558872\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : stat for ceph mgr key(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109", "Tuesday 02 October 2018 08:39:57 -0400 (0:00:00.849) 0:01:10.741 ******* ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1538483997.808753, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483997.9187531, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 73662102, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1538483997.9187531, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744071792120930\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121", "Tuesday 02 October 2018 08:39:58 -0400 (0:00:00.404) 0:01:11.146 
******* ", "changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483997.9187531, u'block_size': 4096, u'inode': 73662102, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'18446744071792120930', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1538483997.808753, u'mimetype': u'text/plain', u'ctime': 1538483997.9187531, u'isblk': False, u'checksum': u'8bb7be95a8da65439da12aedf5f2fdd1235025df', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {\"changed\": true, \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": 
{\"atime\": 1538483997.808753, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483997.9187531, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 73662102, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1538483997.9187531, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744071792120930\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"91380060d243fe3cf688ad21a60a8ace\", \"remote_checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : configure crush hierarchy] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2", "Tuesday 02 October 2018 08:39:58 -0400 (0:00:00.426) 0:01:11.572 ******* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create configured crush rules] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14", "Tuesday 02 October 2018 08:39:58 -0400 (0:00:00.059) 0:01:11.632 ******* ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", 
"skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get id for new default crush rule] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21", "Tuesday 02 October 2018 08:39:58 -0400 (0:00:00.065) 0:01:11.697 ******* ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.067) 0:01:11.765 ******* ", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => 
(item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.062) 0:01:11.827 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.082) 0:01:11.910 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add new default crush rule to ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.196) 0:01:12.106 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.059) 0:01:12.166 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.058) 0:01:12.225 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.053) 0:01:12.279 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.061) 0:01:12.340 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}", "", "TASK [ceph-mon : test if calamari-server is installed] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:2", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.085) 0:01:12.425 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : increase calamari logging level when debug is on] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:18", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.046) 0:01:12.471 ******* ", 
"skipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : initialize the calamari server api] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.053) 0:01:12.524 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.016) 0:01:12.541 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Tuesday 02 October 2018 08:39:59 -0400 (0:00:00.073) 0:01:12.614 ******* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"83f7af8323e264039a95f266faedb4a665c8f4ca\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a72fe8d7f7ff92960aa2e96a1b3fe152\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 1398, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538483999.94-68990543604263/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.554) 0:01:13.169 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.095) 
0:01:13.265 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.135) 0:01:13.401 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.074) 0:01:13.476 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.066) 0:01:13.542 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.045) 0:01:13.588 ******* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Tuesday 02 October 2018 08:40:00 -0400 (0:00:00.088) 0:01:13.676 ******* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.091) 0:01:13.767 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.076) 
0:01:13.844 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.073) 0:01:13.918 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:13.967 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.058) 0:01:14.026 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.060) 0:01:14.086 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.074) 0:01:14.160 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.072) 0:01:14.232 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:14.282 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", 
"RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.062) 0:01:14.344 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.062) 0:01:14.407 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.077) 0:01:14.485 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.070) 0:01:14.556 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.049) 0:01:14.605 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.058) 0:01:14.664 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Tuesday 02 October 2018 08:40:01 -0400 (0:00:00.054) 0:01:14.718 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Tuesday 02 October 2018 
08:40:02 -0400 (0:00:00.080) 0:01:14.799 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Tuesday 02 October 2018 08:40:02 -0400 (0:00:00.077) 0:01:14.877 ******* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"73c8d33ad2b3c95d77ee4b411e06cae6\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484002.21-134431239702871/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:02 -0400 (0:00:00.591) 0:01:15.468 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Tuesday 02 October 2018 08:40:02 -0400 (0:00:00.093) 0:01:15.562 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:02 -0400 (0:00:00.132) 0:01:15.695 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:98", "Tuesday 02 
October 2018 08:40:03 -0400 (0:00:00.112) 0:01:15.808 ******* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20181002084003Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mgrs] ********************************************************************", "", "TASK [set ceph manager install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:110", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.171) 0:01:15.979 ******* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20181002084003Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.092) 0:01:16.071 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030030\", \"end\": \"2018-10-02 12:40:03.551389\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:03.521359\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"d47994d727c0\", \"stdout_lines\": [\"d47994d727c0\"]}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.282) 0:01:16.354 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.052) 0:01:16.406 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.057) 0:01:16.464 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Tuesday 02 October 2018 08:40:03 -0400 (0:00:00.052) 0:01:16.516 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.025933\", \"end\": \"2018-10-02 12:40:04.113330\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:04.087397\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.401) 0:01:16.917 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.057) 0:01:16.975 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.056) 0:01:17.032 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.242) 0:01:17.274 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.055) 0:01:17.329 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.383 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.438 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.053) 0:01:17.491 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.056) 0:01:17.547 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.053) 0:01:17.601 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.054) 0:01:17.655 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Tuesday 02 October 2018 08:40:04 -0400 (0:00:00.052) 0:01:17.707 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:17.761 ******* ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:17.814 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.060) 0:01:17.874 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:17.926 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:17.980 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.056) 0:01:18.036 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd 
mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:18.090 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.057) 0:01:18.147 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.055) 0:01:18.202 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.053) 0:01:18.255 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.052) 0:01:18.308 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Tuesday 02 October 2018 
08:40:05 -0400 (0:00:00.051) 0:01:18.360 ******* ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.245) 0:01:18.606 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Tuesday 02 October 2018 08:40:05 -0400 (0:00:00.081) 0:01:18.687 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Tuesday 02 October 2018 08:40:06 -0400 (0:00:00.081) 0:01:18.768 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Tuesday 02 October 2018 08:40:06 -0400 (0:00:00.075) 0:01:18.844 ******* ", "ok: [controller-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Tuesday 02 October 2018 08:40:06 -0400 (0:00:00.166) 0:01:19.011 ******* ", "ok: [controller-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.388132\", \"end\": \"2018-10-02 12:40:06.857442\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:06.469310\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 
12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Tuesday 02 October 2018 08:40:06 -0400 (0:00:00.655) 0:01:19.667 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Tuesday 02 October 2018 08:40:07 -0400 
(0:00:00.195) 0:01:19.863 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.056) 0:01:19.919 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 50, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.195) 0:01:20.114 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"-\", \"active_gid\": 0, \"active_name\": \"\", \"available\": false, \"available_modules\": [], \"epoch\": 1, \"modules\": [\"balancer\", \"restful\", \"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.089) 0:01:20.204 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.082) 0:01:20.286 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.079) 0:01:20.366 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Tuesday 02 October 2018 08:40:07 -0400 (0:00:00.053) 0:01:20.420 ******* ", "changed: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.689161\", \"end\": \"2018-10-02 08:40:08.515819\", \"rc\": 0, \"start\": \"2018-10-02 08:40:07.826658\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"stdout_lines\": [\"4398e5b0-c63c-11e8-b95a-525400c8bd81\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.901) 0:01:21.322 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.053) 0:01:21.376 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.051) 0:01:21.427 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": 
false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.086) 0:01:21.514 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.049) 0:01:21.564 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.053) 0:01:21.618 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.054) 0:01:21.672 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Tuesday 02 October 2018 08:40:08 -0400 (0:00:00.055) 0:01:21.727 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.059) 0:01:21.787 ******* ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.056) 0:01:21.844 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.052) 0:01:21.897 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.051) 0:01:21.949 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.049) 0:01:21.998 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.051) 0:01:22.050 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.061) 0:01:22.112 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.204) 0:01:22.316 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.052) 0:01:22.369 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Tuesday 02 October 2018 08:40:09 -0400 (0:00:00.181) 0:01:22.551 ******* ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", 
\"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:02.206) 0:01:24.758 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.052) 0:01:24.811 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.064) 0:01:24.875 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.051) 0:01:24.927 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.049) 0:01:24.976 ******* ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Tuesday 02 October 2018 
08:40:12 -0400 (0:00:00.424) 0:01:25.401 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.081) 0:01:25.483 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Tuesday 02 October 2018 08:40:12 -0400 (0:00:00.047) 0:01:25.530 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.022088\", \"end\": \"2018-10-02 12:40:13.005544\", \"rc\": 0, \"start\": \"2018-10-02 12:40:12.983456\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.274) 0:01:25.805 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.095) 0:01:25.900 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.025656\", \"end\": \"2018-10-02 12:40:13.374926\", \"failed_when_result\": false, \"rc\": 0, \"start\": 
\"2018-10-02 12:40:13.349270\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"d47994d727c0\", \"stdout_lines\": [\"d47994d727c0\"]}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.277) 0:01:26.177 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.068) 0:01:26.246 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.068) 0:01:26.315 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.084) 0:01:26.399 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.064) 0:01:26.464 ******* ", "skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => 
{\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.134) 0:01:26.598 ******* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", 
\"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Tuesday 02 October 2018 08:40:13 -0400 (0:00:00.146) 0:01:26.744 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:26.793 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.046) 0:01:26.839 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:26.892 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.056) 0:01:26.949 ******* ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.055) 0:01:27.004 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.053 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.102 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.048) 0:01:27.150 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"d47994d727c0\"], \"delta\": \"0:00:00.024196\", \"end\": \"2018-10-02 12:40:14.626563\", \"rc\": 0, \"start\": \"2018-10-02 12:40:14.602367\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57\\\",\\n \\\"Created\\\": \\\"2018-10-02T12:39:38.443855569Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": 
true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 45141,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-10-02T12:39:38.624208881Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": 
\\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 3221225472,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 6442450944,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f-init/diff:/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff:/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/diff\\\",\\n 
\\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.15\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n 
\\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a 
fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"88005597e5b8601dd06c206a599504f9e06151150e681e9896950ce1dc0e8570\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"5126de8d808d5c5d8a90d1e72a006d96449de4809ed996069fb1f3b5e4bb5f68\\\",\\n \\\"EndpointID\\\": \\\"fa6cc8203a497c959078fa65db5e9c6f93592bae4497628b9f488f99f597c39a\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57\\\",\", \" \\\"Created\\\": \\\"2018-10-02T12:39:38.443855569Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": 
[],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 45141,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-10-02T12:39:38.624208881Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/d47994d727c085016bf827559c830d545fd126dd6722856a9da36d99f7de0b57/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" 
\\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": 
\\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f-init/diff:/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff:/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/2ac6c842d628e078b0fc968e75841d32c7e08611e3471a33f2cbb8a806235f1f/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" 
\\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.15\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker 
run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"88005597e5b8601dd06c206a599504f9e06151150e681e9896950ce1dc0e8570\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"5126de8d808d5c5d8a90d1e72a006d96449de4809ed996069fb1f3b5e4bb5f68\\\",\", \" \\\"EndpointID\\\": \\\"fa6cc8203a497c959078fa65db5e9c6f93592bae4497628b9f488f99f597c39a\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.296) 0:01:27.447 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.056) 0:01:27.503 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:27.556 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.050) 0:01:27.607 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.057) 0:01:27.664 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Tuesday 02 October 2018 08:40:14 -0400 (0:00:00.052) 0:01:27.717 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Tuesday 02 October 2018 08:40:15 -0400 
(0:00:00.053) 0:01:27.770 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\"], \"delta\": \"0:00:00.026283\", \"end\": \"2018-10-02 12:40:15.239872\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:15.213589\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n 
\\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a 
single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": 
\\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.284) 0:01:28.055 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.049) 0:01:28.105 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.048) 0:01:28.154 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.047) 0:01:28.202 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.053) 0:01:28.256 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.051) 0:01:28.307 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.049) 0:01:28.357 ******* ", "ok: [controller-0] => {\"ansible_facts\": 
{\"ceph_mon_image_repodigest_before_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.087) 0:01:28.444 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.046) 0:01:28.490 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.048) 0:01:28.539 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.056) 0:01:28.595 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.053) 0:01:28.648 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact 
ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Tuesday 02 October 2018 08:40:15 -0400 (0:00:00.050) 0:01:28.699 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.050) 0:01:28.750 ******* ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.034272\", \"end\": \"2018-10-02 12:40:16.223576\", \"rc\": 0, \"start\": \"2018-10-02 12:40:16.189304\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.282) 0:01:29.032 ******* ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.024632\", \"end\": \"2018-10-02 12:40:16.594993\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:16.570361\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n 
\\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} 
-e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": 
\\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/764480ce03078d44639be5d67ae0113074ceb893d9dc8edd9181ea33cde8e7eb/diff:/var/lib/docker/overlay2/09bad61f94ac97809557eb701afb65fb6fb0618e9516a1808d1006f117f77853/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/dee3508a552fbdbf03b1fe91fbcd4c485186e308d765e7a1a624f0eb01f23075/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.376) 0:01:29.409 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.187) 0:01:29.596 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.057) 0:01:29.654 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Tuesday 02 October 2018 08:40:16 -0400 (0:00:00.050) 0:01:29.705 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.058) 0:01:29.764 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.051) 0:01:29.815 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.052) 0:01:29.867 ******* ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.047) 0:01:29.914 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.054) 0:01:29.969 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.052) 0:01:30.021 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.048) 0:01:30.070 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.048) 0:01:30.118 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Tuesday 02 October 2018 08:40:17 -0400 (0:00:00.049) 0:01:30.168 ******* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.421396\", \"end\": \"2018-10-02 12:40:18.131668\", \"rc\": 0, \"start\": \"2018-10-02 12:40:17.710272\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.769) 0:01:30.937 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.249) 0:01:31.187 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.052) 0:01:31.239 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.048) 
0:01:31.288 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.083) 0:01:31.371 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.051) 0:01:31.423 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Tuesday 02 October 2018 08:40:18 -0400 (0:00:00.058) 0:01:31.482 ******* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Tuesday 02 October 2018 08:40:19 -0400 (0:00:00.957) 0:01:32.439 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Tuesday 02 October 2018 08:40:19 -0400 (0:00:00.056) 0:01:32.496 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Tuesday 02 October 2018 08:40:19 -0400 (0:00:00.060) 0:01:32.556 ******* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Tuesday 02 October 2018 08:40:20 -0400 (0:00:00.217) 0:01:32.773 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Tuesday 02 October 2018 08:40:20 -0400 (0:00:00.057) 0:01:32.830 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Tuesday 02 October 2018 08:40:20 -0400 (0:00:00.055) 0:01:32.886 ******* ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Tuesday 02 October 2018 08:40:20 -0400 (0:00:00.253) 0:01:33.139 ******* ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"d7acef6abeb4e7853e1cf2b7e41f2f58868cad4a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a31e326b2b79369b2901aa2d0f318a37\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484020.44-8431852936027/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : 
set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Tuesday 02 October 2018 08:40:20 -0400 (0:00:00.580) 0:01:33.719 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2", "Tuesday 02 October 2018 08:40:21 -0400 (0:00:00.052) 0:01:33.772 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mgr : create mgr directory] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2", "Tuesday 02 October 2018 08:40:21 -0400 (0:00:00.124) 0:01:33.896 ******* ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10", "Tuesday 02 October 2018 08:40:21 -0400 (0:00:00.254) 0:01:34.150 ******* ", "changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"91380060d243fe3cf688ad21a60a8ace\", \"mode\": \"0600\", 
\"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484021.46-178482665226417/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set mgr key permissions] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24", "Tuesday 02 October 2018 08:40:21 -0400 (0:00:00.564) 0:01:34.714 ******* ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}", "", "TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2", "Tuesday 02 October 2018 08:40:22 -0400 (0:00:00.250) 0:01:34.965 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : install ceph mgr for debian] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9", "Tuesday 02 October 2018 08:40:22 -0400 (0:00:00.054) 0:01:35.020 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17", "Tuesday 02 October 2018 08:40:22 -0400 
(0:00:00.052) 0:01:35.072 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25", "Tuesday 02 October 2018 08:40:22 -0400 (0:00:00.051) 0:01:35.124 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35", "Tuesday 02 October 2018 08:40:22 -0400 (0:00:00.050) 0:01:35.174 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2", "Tuesday 02 October 2018 08:40:22 -0400 (0:00:00.049) 0:01:35.224 ******* ", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"168504b73edc17939666d0ef559eaab44f0382c8\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"35d5093713655bbf808450ce1bb2b512\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 734, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484022.52-112121441174884/source\", \"state\": 
\"file\", \"uid\": 0}", "", "TASK [ceph-mgr : systemd start mgr container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13", "Tuesday 02 October 2018 08:40:23 -0400 (0:00:00.851) 0:01:36.075 ******* ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmgr.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 
--name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127792\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127792\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": 
\"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19", "Tuesday 02 October 2018 08:40:23 -0400 (0:00:00.529) 0:01:36.605 ******* ", "changed: [controller-0 -> 192.168.24.10] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", 
\"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.385691\", \"end\": \"2018-10-02 12:40:24.459499\", \"rc\": 0, \"start\": \"2018-10-02 12:40:24.073808\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\"]}", "", "TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26", "Tuesday 02 October 2018 08:40:24 -0400 (0:00:00.655) 0:01:37.260 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"], \"enabled_modules\": [\"balancer\", \"restful\", \"status\"]}}, \"changed\": false}", "", "TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32", "Tuesday 02 October 2018 08:40:24 -0400 (0:00:00.086) 0:01:37.347 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_disabled_ceph_mgr_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"]}, \"changed\": false}", "", "TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38", "Tuesday 02 October 2018 08:40:24 -0400 (0:00:00.119) 0:01:37.467 ******* ", "changed: [controller-0 -> 192.168.24.10] => (item=balancer) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", 
\"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"balancer\"], \"delta\": \"0:00:01.212066\", \"end\": \"2018-10-02 12:40:26.244390\", \"item\": \"balancer\", \"rc\": 0, \"start\": \"2018-10-02 12:40:25.032324\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [controller-0 -> 192.168.24.10] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:00.810150\", \"end\": \"2018-10-02 12:40:27.236623\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-10-02 12:40:26.426473\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add modules to ceph-mgr] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49", "Tuesday 02 October 2018 08:40:27 -0400 (0:00:02.604) 0:01:40.072 ******* ", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Tuesday 02 October 2018 08:40:27 -0400 (0:00:00.030) 0:01:40.103 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Tuesday 02 October 2018 08:40:27 -0400 (0:00:00.169) 0:01:40.272 ******* ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": 
\"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.566) 0:01:40.839 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.172) 0:01:41.012 ******* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.135) 0:01:41.147 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph manager install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:129", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.103) 0:01:41.251 ******* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20181002084028Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [osds] ********************************************************************", "", "TASK [set ceph osd install 'In Progress'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:141", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.166) 0:01:41.418 ******* ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20181002084028Z\", \"status\": \"In 
Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.080) 0:01:41.499 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Tuesday 02 October 2018 08:40:28 -0400 (0:00:00.045) 0:01:41.545 ******* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.028737\", \"end\": \"2018-10-02 12:40:29.013252\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:28.984515\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.263) 0:01:41.808 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:41.857 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Tuesday 02 October 2018 08:40:29 -0400 
(0:00:00.043) 0:01:41.900 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.047) 0:01:41.948 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:41.996 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.043 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.084 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.039) 0:01:42.123 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] 
*****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.164 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.041) 0:01:42.205 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.040) 0:01:42.246 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.043) 0:01:42.290 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.047) 0:01:42.337 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 
0:01:42.385 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:42.434 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.480 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.051) 0:01:42.532 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.046) 0:01:42.579 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.056) 0:01:42.635 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not 
used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.048) 0:01:42.684 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Tuesday 02 October 2018 08:40:29 -0400 (0:00:00.045) 0:01:42.730 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.044) 0:01:42.774 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.045) 0:01:42.819 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.045) 0:01:42.865 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.053) 
0:01:42.918 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.046) 0:01:42.965 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.046) 0:01:43.011 ******* ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.227) 0:01:43.239 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.070) 0:01:43.310 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.084) 0:01:43.394 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Tuesday 02 October 2018 
08:40:30 -0400 (0:00:00.073) 0:01:43.467 ******* ", "ok: [ceph-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Tuesday 02 October 2018 08:40:30 -0400 (0:00:00.146) 0:01:43.614 ******* ", "ok: [ceph-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.336806\", \"end\": \"2018-10-02 12:40:31.399288\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:31.062482\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Tuesday 02 October 2018 08:40:31 -0400 (0:00:00.588) 0:01:44.202 ******* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] 
***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Tuesday 02 October 2018 08:40:31 -0400 (0:00:00.188) 0:01:44.391 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Tuesday 02 October 2018 08:40:31 -0400 (0:00:00.049) 0:01:44.441 ******* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Tuesday 02 October 2018 08:40:31 -0400 (0:00:00.183) 0:01:44.624 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.15:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Tuesday 02 October 2018 08:40:31 -0400 (0:00:00.083) 0:01:44.707 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.076) 0:01:44.784 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK 
[ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.077) 0:01:44.861 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.047) 0:01:44.909 ******* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.190) 0:01:45.100 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.043) 0:01:45.144 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.045) 0:01:45.189 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, 
\"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.176) 0:01:45.365 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.040) 0:01:45.406 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.173) 0:01:45.580 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.074) 0:01:45.654 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Tuesday 02 October 2018 08:40:32 -0400 (0:00:00.072) 0:01:45.727 ******* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.004040\", \"end\": \"2018-10-02 12:40:33.304661\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.300621\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": 
[\"/dev/vdb\"]}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.002955\", \"end\": \"2018-10-02 12:40:33.485602\", \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.482647\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.002920\", \"end\": \"2018-10-02 12:40:33.658559\", \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.655639\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.002832\", \"end\": \"2018-10-02 12:40:33.822927\", \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.820095\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.002898\", \"end\": \"2018-10-02 12:40:33.985666\", \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.982768\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:01.055) 0:01:46.782 ******* ", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.304661', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.004040', '_ansible_item_label': 
u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-10-02 12:40:33.300621', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.004040\", \"end\": \"2018-10-02 12:40:33.304661\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.300621\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.485602', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 'item': u'/dev/vdc', u'delta': u'0:00:00.002955', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-10-02 12:40:33.482647', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", 
\"/dev/vdc\"], \"delta\": \"0:00:00.002955\", \"end\": \"2018-10-02 12:40:33.485602\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.482647\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.658559', '_ansible_no_log': False, u'stdout': u'/dev/vdd', u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.002920', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-10-02 12:40:33.655639', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.002920\", \"end\": \"2018-10-02 12:40:33.658559\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.655639\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], 
'_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.822927', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.002832', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-10-02 12:40:33.820095', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.002832\", \"end\": \"2018-10-02 12:40:33.822927\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.820095\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-10-02 12:40:33.985666', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': u'0:00:00.002898', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-10-02 
12:40:33.982768', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.002898\", \"end\": \"2018-10-02 12:40:33.985666\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-10-02 12:40:33.982768\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.289) 0:01:47.072 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.208) 0:01:47.280 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.330 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.380 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.047) 0:01:47.428 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.049) 0:01:47.477 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Tuesday 02 October 2018 08:40:34 -0400 (0:00:00.183) 0:01:47.661 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Tuesday 02 October 2018 08:40:35 -0400 (0:00:00.137) 0:01:47.798 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task 
path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Tuesday 02 October 2018 08:40:35 -0400 (0:00:00.069) 0:01:47.868 ******* ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", 
\"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": 
\"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:02.038) 0:01:49.906 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.047) 0:01:49.954 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.046) 0:01:50.000 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.045) 0:01:50.046 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.046) 0:01:50.092 ******* ", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": 
\"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.376) 0:01:50.468 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.077) 0:01:50.546 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Tuesday 02 October 2018 08:40:37 -0400 (0:00:00.042) 0:01:50.588 ******* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.021809\", \"end\": \"2018-10-02 12:40:38.017706\", \"rc\": 0, \"start\": \"2018-10-02 12:40:37.995897\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.222) 0:01:50.810 ******* ", "ok: [ceph-0] => {\"ansible_facts\": 
{\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.078) 0:01:50.888 ******* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.023339\", \"end\": \"2018-10-02 12:40:38.346811\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:38.323472\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.256) 0:01:51.145 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.097) 0:01:51.243 ******* ", "ok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.157) 0:01:51.400 ******* ", 
"ok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.096) 0:01:51.497 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Tuesday 02 October 2018 08:40:38 -0400 (0:00:00.104) 0:01:51.601 ******* ", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1538483996.1513722, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"d677a326bd647888546790f10e2cedd45b16b16c\", \"ctime\": 1538483996.1513722, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382517, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.1513722, \"nlink\": 1, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1538483996.3323712, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"ctime\": 1538483996.3323712, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382519, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.3323712, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 
1538483996.5133705, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"ctime\": 1538483996.5133705, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 77181311, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.5133705, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1538483996.6923697, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"ctime\": 1538483996.6923697, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 80259623, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.6923697, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\", 
\"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1538483996.870369, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"ctime\": 1538483996.870369, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 84314226, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.870369, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1538483997.047368, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"ctime\": 1538483997.047368, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": 
true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 89100520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483997.047368, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1538484021.4992602, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483998.7743604, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382521, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483998.7743604, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-docker-common : 
fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:01.380) 0:01:52.981 ******* ", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.1513722, u'block_size': 4096, u'inode': 59382517, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1538483996.1513722, u'mimetype': u'unknown', u'ctime': 1538483996.1513722, u'isblk': False, u'checksum': u'd677a326bd647888546790f10e2cedd45b16b16c', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, 
\"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1538483996.1513722, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"d677a326bd647888546790f10e2cedd45b16b16c\", \"ctime\": 1538483996.1513722, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382517, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.1513722, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, 
'_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.3323712, u'block_size': 4096, u'inode': 59382519, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, 
u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1538483996.3323712, u'mimetype': u'unknown', u'ctime': 1538483996.3323712, u'isblk': False, u'checksum': u'55ce938694f0ed88cb9c4903bdb60b986ace7379', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1538483996.3323712, 
\"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"55ce938694f0ed88cb9c4903bdb60b986ace7379\", \"ctime\": 1538483996.3323712, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382519, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.3323712, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.5133705, u'block_size': 4096, u'inode': 77181311, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1538483996.5133705, u'mimetype': u'unknown', u'ctime': 1538483996.5133705, u'isblk': False, u'checksum': 
u'f28d2d0af61547531ab0fa31ff23aca020f498eb', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1538483996.5133705, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"ctime\": 1538483996.5133705, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, 
\"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 77181311, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.5133705, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.6923697, u'block_size': 4096, u'inode': 80259623, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1538483996.6923697, u'mimetype': u'unknown', u'ctime': 1538483996.6923697, u'isblk': False, u'checksum': u'4ad6235f1694fb6b72596dffe07b7a3347c382b4', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 
'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1538483996.6923697, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4ad6235f1694fb6b72596dffe07b7a3347c382b4\", \"ctime\": 1538483996.6923697, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 80259623, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 
1538483996.6923697, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483996.870369, u'block_size': 4096, u'inode': 84314226, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1538483996.870369, u'mimetype': u'unknown', u'ctime': 1538483996.870369, u'isblk': False, u'checksum': u'4d16e08847d6079bcd8caa2adf07e9012cb0f41e', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1538483996.870369, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"4d16e08847d6079bcd8caa2adf07e9012cb0f41e\", \"ctime\": 1538483996.870369, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 84314226, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483996.870369, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, 
\"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483997.047368, u'block_size': 4096, u'inode': 89100520, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1538483997.047368, u'mimetype': u'unknown', u'ctime': 1538483997.047368, u'isblk': False, u'checksum': u'5255ad2e079bcf92a5703629e8cbeb93fa79b47a', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1538483997.047368, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5255ad2e079bcf92a5703629e8cbeb93fa79b47a\", \"ctime\": 1538483997.047368, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 89100520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483997.047368, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => 
(item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1538483998.7743604, u'block_size': 4096, u'inode': 59382521, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1538484021.4992602, u'mimetype': u'unknown', u'ctime': 1538483998.7743604, u'isblk': False, u'checksum': u'8bb7be95a8da65439da12aedf5f2fdd1235025df', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": 
\"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1538484021.4992602, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"8bb7be95a8da65439da12aedf5f2fdd1235025df\", \"ctime\": 1538483998.7743604, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 59382521, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1538483998.7743604, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.342) 0:01:53.323 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.365 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.043) 0:01:53.409 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.044) 0:01:53.453 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.045) 0:01:53.499 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.045) 0:01:53.544 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.043) 0:01:53.588 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.050) 0:01:53.639 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.681 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Tuesday 02 October 2018 08:40:40 -0400 (0:00:00.042) 0:01:53.723 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.048) 0:01:53.772 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.814 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.857 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.050) 0:01:53.908 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:53.951 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:53.994 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.041 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Tuesday 02 October 2018 08:40:41 -0400 
(0:00:00.044) 0:01:54.085 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:54.127 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.047) 0:01:54.175 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.219 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.263 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.306 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.043) 0:01:54.349 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.396 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.047) 0:01:54.443 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.488 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.042) 0:01:54.531 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.044) 0:01:54.576 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Tuesday 02 October 2018 08:40:41 -0400 (0:00:00.046) 0:01:54.622 ******* ", "ok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.105539\", \"end\": \"2018-10-02 12:40:55.168399\", \"rc\": 0, \"start\": \"2018-10-02 12:40:42.062860\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:13.347) 0:02:07.970 ******* ", "changed: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.025743\", \"end\": \"2018-10-02 12:40:55.425542\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:40:55.399799\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": 
{},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": 
\\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base 
image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1487bf057dc6ee0e44030b9fda5febe23f8daf3d246e0762b1ec85ae495261ed/diff:/var/lib/docker/overlay2/172b14eff060835530b211895b7380ac50933aecf7a81f4d0bfe61b55da6fd8a/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": 
\\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1487bf057dc6ee0e44030b9fda5febe23f8daf3d246e0762b1ec85ae495261ed/diff:/var/lib/docker/overlay2/172b14eff060835530b211895b7380ac50933aecf7a81f4d0bfe61b55da6fd8a/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/f50db367f90c8bed331db6170be7830a79719f9076b4f3ab588f87f42b8cf883/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.261) 0:02:08.231 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.078) 0:02:08.310 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.356 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.047) 0:02:08.404 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.044) 0:02:08.448 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.494 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.539 ******* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.045) 0:02:08.585 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.051) 0:02:08.636 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.043) 0:02:08.680 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Tuesday 02 October 2018 08:40:55 -0400 (0:00:00.042) 0:02:08.723 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.044) 0:02:08.767 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Tuesday 02 October 
2018 08:40:56 -0400 (0:00:00.042) 0:02:08.810 ******* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.476108\", \"end\": \"2018-10-02 12:40:56.715032\", \"rc\": 0, \"start\": \"2018-10-02 12:40:56.238924\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.706) 0:02:09.516 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Tuesday 02 October 2018 08:40:56 -0400 (0:00:00.196) 0:02:09.712 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.046) 0:02:09.759 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.045) 0:02:09.804 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", 
"TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.178) 0:02:09.982 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.045) 0:02:10.028 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Tuesday 02 October 2018 08:40:57 -0400 (0:00:00.054) 0:02:10.082 ******* ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => 
(item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.994) 0:02:11.077 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.049) 0:02:11.127 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.053) 0:02:11.180 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.173) 0:02:11.354 ******* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.052) 0:02:11.406 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.046) 0:02:11.453 ******* ", "changed: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Tuesday 02 October 2018 08:40:58 -0400 (0:00:00.242) 0:02:11.695 ******* ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set 
_mds_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"2f883daf3398fbd093f10bbdbf556328ece3203e\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"3cdc9cc79dae4f2e11edf0a447f9356d\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1213, \"src\": 
\"/tmp/ceph_ansible_tmp/ansible-tmp-1538484059.01-13497339424245/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:02.137) 0:02:13.833 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure public_network configured] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.074) 0:02:13.908 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure cluster_network configured] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.052) 0:02:13.960 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure journal_size configured] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.054) 0:02:14.015 ******* ", "ok: [ceph-0] => {", " \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"", "}", "", "TASK [ceph-osd : make sure an osd scenario was chosen] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.094) 0:02:14.110 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.159 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify devices have been provided] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.053) 0:02:14.213 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.066) 0:02:14.279 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify lvm_volumes have been provided] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.328 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69", 
"Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.055) 0:02:14.384 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the devices variable is a list] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.052) 0:02:14.437 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify dedicated devices have been provided] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.050) 0:02:14.487 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.049) 0:02:14.537 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.051) 0:02:14.588 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include system_tuning.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.048) 0:02:14.637 ******* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0", "", "TASK [ceph-osd : disable osd directory parsing by updatedb] ********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2", "Tuesday 02 October 2018 08:41:01 -0400 (0:00:00.077) 0:02:14.714 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11", "Tuesday 02 October 2018 08:41:02 -0400 (0:00:00.047) 0:02:14.762 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : create tmpfiles.d directory] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22", "Tuesday 02 October 2018 08:41:02 -0400 (0:00:00.056) 0:02:14.818 ******* ", "ok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}", "", "TASK [ceph-osd : disable transparent hugepage] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33", "Tuesday 02 October 2018 08:41:02 -0400 (0:00:00.343) 0:02:15.162 ******* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484062.57-141789700165393/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : get default vm.min_free_kbytes] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45", "Tuesday 02 October 2018 08:41:03 -0400 
(0:00:00.632) 0:02:15.794 ******* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.004662\", \"end\": \"2018-10-02 12:41:03.353659\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:41:03.348997\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}", "", "TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52", "Tuesday 02 October 2018 08:41:03 -0400 (0:00:00.355) 0:02:16.150 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}", "", "TASK [ceph-osd : apply operating system tuning] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56", "Tuesday 02 October 2018 08:41:03 -0400 (0:00:00.185) 0:02:16.335 ******* ", "changed: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}", "changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}", "changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}", "changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}", "changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}", "", "TASK [ceph-osd : install dependencies] *****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10", "Tuesday 02 October 2018 08:41:04 -0400 (0:00:01.209) 0:02:17.544 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include common.yml] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18", "Tuesday 02 October 2018 08:41:04 -0400 (0:00:00.139) 0:02:17.684 ******* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0", "", "TASK [ceph-osd : create bootstrap-osd and osd directories] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2", "Tuesday 02 October 2018 08:41:05 -0400 (0:00:00.088) 0:02:17.772 ******* ", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-osd : copy ceph key(s) if needed] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15", "Tuesday 02 October 2018 08:41:05 -0400 (0:00:00.398) 0:02:18.171 ******* ", "changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f28d2d0af61547531ab0fa31ff23aca020f498eb\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": 
true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"096130d29629dd16899b5da08c7a169f\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484065.48-200308777357540/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2", "Tuesday 02 October 2018 08:41:05 -0400 (0:00:00.538) 0:02:18.710 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.042) 0:02:18.752 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.052) 0:02:18.805 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.049) 0:02:18.855 ******* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.046) 0:02:18.901 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.048) 0:02:18.950 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.048) 0:02:18.999 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.043) 0:02:19.042 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.079) 0:02:19.122 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact 
docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.049) 0:02:19.172 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.055) 0:02:19.227 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.046) 0:02:19.273 ******* ", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-10-02-08-22-43-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-10-02-08-22-43-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fec224dd-43d4-4761-93fb-772f1b28103d', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fec224dd-43d4-4761-93fb-772f1b28103d']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-10-02-08-22-43-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-10-02-08-22-43-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fec224dd-43d4-4761-93fb-772f1b28103d\"]}, \"sectors\": \"20967391\", \"sectorsize\": 512, \"size\": \"10.00 GB\", \"start\": \"4096\", \"uuid\": \"fec224dd-43d4-4761-93fb-772f1b28103d\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"20971520\", \"sectorsize\": \"512\", \"size\": \"10.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {\"changed\": false, \"item\": {\"key\": \"vdc\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {\"changed\": false, \"item\": {\"key\": \"vde\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {\"changed\": false, \"item\": {\"key\": \"vdd\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {\"changed\": false, \"item\": {\"key\": \"vdf\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : resolve dedicated device link(s)] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.099) 0:02:19.372 ******* ", "", "TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.042) 0:02:19.415 ******* ", "", "TASK [ceph-osd : set_fact build final dedicated_devices list] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.043) 0:02:19.459 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : read information about the devices] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29", "Tuesday 02 October 2018 08:41:06 -0400 (0:00:00.044) 0:02:19.503 ******* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block 
Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "", "TASK [ceph-osd : check the partition status of the osd disks] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2", "Tuesday 02 October 2018 08:41:07 -0400 (0:00:01.161) 0:02:20.664 ******* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007504\", \"end\": \"2018-10-02 12:41:08.112642\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.105138\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.006561\", \"end\": \"2018-10-02 12:41:08.275692\", 
\"failed_when_result\": false, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.269131\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006871\", \"end\": \"2018-10-02 12:41:08.427145\", \"failed_when_result\": false, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.420274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006604\", \"end\": \"2018-10-02 12:41:08.572767\", \"failed_when_result\": false, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.566163\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.012161\", \"end\": \"2018-10-02 12:41:08.727557\", \"failed_when_result\": false, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.715396\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create gpt disk label] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11", "Tuesday 02 October 2018 08:41:08 -0400 (0:00:00.861) 0:02:21.526 ******* ", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-10-02 12:41:08.112642', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': 
{u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007504', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.105138', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.012537\", \"end\": \"2018-10-02 12:41:08.971637\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007504\", \"end\": \"2018-10-02 12:41:08.112642\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.105138\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:08.959100\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-10-02 12:41:08.275692', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdc', 
u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdc', u'delta': u'0:00:00.006561', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.269131', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.007439\", \"end\": \"2018-10-02 12:41:09.141079\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.006561\", \"end\": \"2018-10-02 12:41:08.275692\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.269131\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.133640\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-10-02 12:41:08.427145', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006871', 
'_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.420274', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.013353\", \"end\": \"2018-10-02 12:41:09.333743\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006871\", \"end\": \"2018-10-02 12:41:08.427145\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.420274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.320390\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-10-02 12:41:08.572767', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.006604', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 
'failed_when_result': False, u'start': u'2018-10-02 12:41:08.566163', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008080\", \"end\": \"2018-10-02 12:41:09.510180\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006604\", \"end\": \"2018-10-02 12:41:08.572767\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.566163\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.502100\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-10-02 12:41:08.727557', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.012161', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:08.715396', '_ansible_ignore_errors': None, u'failed': 
False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.010214\", \"end\": \"2018-10-02 12:41:09.685000\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.012161\", \"end\": \"2018-10-02 12:41:08.727557\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:08.715396\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:09.674786\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : include scenarios/collocated.yml] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41", "Tuesday 02 October 2018 08:41:09 -0400 (0:00:00.969) 0:02:22.495 ******* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0", "", "TASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5", "Tuesday 02 October 2018 08:41:09 -0400 (0:00:00.091) 0:02:22.586 ******* ", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': 
{u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.943702\", \"end\": \"2018-10-02 12:41:16.985486\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:10.041784\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:10'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fe117fde-832c-4763-a5e3-451d4d10d6a6 /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fe117fde-832c-4763-a5e3-451d4d10d6a6 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:91ed2c1d-609c-486f-a066-6419a5472482 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.pnZHZR with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 
/var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.pnZHZR\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.pnZHZR/journal -> /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.pnZHZR\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.pnZHZR\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: 
/usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:10'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fe117fde-832c-4763-a5e3-451d4d10d6a6 /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fe117fde-832c-4763-a5e3-451d4d10d6a6 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:91ed2c1d-609c-486f-a066-6419a5472482 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: 
Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.pnZHZR with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.pnZHZR\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/ceph_fsid.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/fsid.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/magic.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/journal_uuid.19078.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.pnZHZR/journal -> /dev/disk/by-partuuid/fe117fde-832c-4763-a5e3-451d4d10d6a6\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR/type.19078.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.pnZHZR\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.pnZHZR\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.pnZHZR\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.pnZHZR\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running 
command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:10 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:10 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:10 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:10 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:10 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:10 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:10 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:10 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' 
from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-10-02 12:41:10 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', 
u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.806459\", \"end\": \"2018-10-02 12:41:23.974937\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:17.168478\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:17'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8b8ec385-16bc-490b-b98d-385540b0f964 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8b8ec385-16bc-490b-b98d-385540b0f964 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6eb57385-48f5-4f84-abb9-66bc21d04543 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.gjpag9 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.gjpag9\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjpag9/journal -> /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.gjpag9\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjpag9\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:17'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8b8ec385-16bc-490b-b98d-385540b0f964 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8b8ec385-16bc-490b-b98d-385540b0f964 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6eb57385-48f5-4f84-abb9-66bc21d04543 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.gjpag9 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.gjpag9\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/ceph_fsid.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/fsid.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/magic.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/journal_uuid.19338.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjpag9/journal -> /dev/disk/by-partuuid/8b8ec385-16bc-490b-b98d-385540b0f964\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9/type.19338.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjpag9\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjpag9\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.gjpag9\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjpag9\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:17 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:17 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:17 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:17 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:17 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:17 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:17 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:17 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership 
of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:17 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': 
u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.754927\", \"end\": \"2018-10-02 12:41:30.908051\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:24.153124\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" 
in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: 
create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:24'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fef6486a-e7cf-4964-b234-b91f87a44ac9 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fef6486a-e7cf-4964-b234-b91f87a44ac9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1ecc8cb2-d418-4bbb-9eb1-7f16b4f8d236 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.Mg8JM3 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.Mg8JM3\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Mg8JM3/journal -> /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Mg8JM3\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Mg8JM3\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:24'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid fef6486a-e7cf-4964-b234-b91f87a44ac9 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:fef6486a-e7cf-4964-b234-b91f87a44ac9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1ecc8cb2-d418-4bbb-9eb1-7f16b4f8d236 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.Mg8JM3 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Mg8JM3\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/ceph_fsid.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/fsid.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/magic.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/journal_uuid.19596.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Mg8JM3/journal -> /dev/disk/by-partuuid/fef6486a-e7cf-4964-b234-b91f87a44ac9\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3/type.19596.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Mg8JM3\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Mg8JM3\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Mg8JM3\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:24 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:24 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:24 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:24 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:24 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:24 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:24 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:24 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:24 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] 
=> (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.615148\", \"end\": \"2018-10-02 12:41:37.692462\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, 
\"start\": \"2018-10-02 12:41:31.077314\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:31'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid ef76da91-06ef-48f2-ac83-44e036954486 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:ef76da91-06ef-48f2-ac83-44e036954486 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling 
partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0293c581-8b59-4892-ba50-68ac61ecb1c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.KWOrR0 with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.KWOrR0\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KWOrR0/journal -> /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.KWOrR0\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KWOrR0\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:31'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid ef76da91-06ef-48f2-ac83-44e036954486 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:ef76da91-06ef-48f2-ac83-44e036954486 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0293c581-8b59-4892-ba50-68ac61ecb1c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.KWOrR0 with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KWOrR0\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/ceph_fsid.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/fsid.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/magic.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/journal_uuid.19855.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KWOrR0/journal -> /dev/disk/by-partuuid/ef76da91-06ef-48f2-ac83-44e036954486\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0/type.19855.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KWOrR0\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KWOrR0\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.KWOrR0\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KWOrR0\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:31 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:31 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:31 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:31 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:31 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:31 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:31 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:31 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:31 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", 
\"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:07.002085\", \"end\": \"2018-10-02 12:41:44.891974\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", 
\"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-10-02 12:41:37.889889\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:38'\\n+common_functions.sh:13: log(): echo '2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3ae85ed2-2af1-464d-87a1-0d5f98798701 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3ae85ed2-2af1-464d-87a1-0d5f98798701 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:84fd63db-59ea-4e51-953d-be7355a12f83 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.G0typD with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.G0typD\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.G0typD/journal -> /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.G0typD\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.G0typD\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-10-02 12:41:38'\", \"+common_functions.sh:13: log(): echo '2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3ae85ed2-2af1-464d-87a1-0d5f98798701 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3ae85ed2-2af1-464d-87a1-0d5f98798701 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:84fd63db-59ea-4e51-953d-be7355a12f83 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.G0typD with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.G0typD\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/ceph_fsid.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/fsid.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/magic.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/journal_uuid.20115.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.G0typD/journal -> /dev/disk/by-partuuid/3ae85ed2-2af1-464d-87a1-0d5f98798701\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD/type.20115.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.G0typD\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.G0typD\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.G0typD\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.G0typD\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-10-02 12:41:38 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-10-02 12:41:38 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-10-02 12:41:38 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-10-02 12:41:38 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.9bLn7X0Tn2' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-10-02 12:41:38 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-10-02 12:41:38 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-10-02 12:41:38 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-10-02 12:41:38 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.m3i2xlTmuA' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.3Ji4gFiPGj' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.KQLVhJaQiu' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.e5J3z0HHLE' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.9bLn7X0Tn2' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-10-02 12:41:38 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = 
sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}", "", "TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30", "Tuesday 02 October 2018 08:41:44 -0400 (0:00:35.127) 0:02:57.714 ******* ", "skipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.069) 0:02:57.784 ******* ", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', 
u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", 
\"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, 
u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/non-collocated.yml] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.100) 0:02:57.885 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/lvm.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.044) 0:02:57.929 ******* ", "skipping: [ceph-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include activate_osds.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.039) 0:02:57.969 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include start_osds.yml] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.040) 0:02:58.009 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include docker/main.yml] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.042) 0:02:58.051 ******* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0", "", "TASK [ceph-osd : include start_docker_osd.yml] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.081) 0:02:58.133 ******* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0", "", "TASK [ceph-osd : umount ceph disk (if on openstack)] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.061) 0:02:58.194 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : test if the container image has the disk_list function] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13", "Tuesday 02 October 2018 08:41:45 -0400 (0:00:00.044) 0:02:58.239 ******* ", "ok: 
[ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-12\", \"disk_list.sh\"], \"delta\": \"0:00:00.363866\", \"end\": \"2018-10-02 12:41:46.173239\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:41:45.809373\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 10557679 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-08-06 22:27:40.000000000 +0000\\nModify: 2018-08-06 22:27:40.000000000 +0000\\nChange: 2018-10-02 12:40:47.417875170 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 10557679 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-08-06 22:27:40.000000000 +0000\", \"Modify: 2018-08-06 22:27:40.000000000 +0000\", \"Change: 2018-10-02 12:40:47.417875170 +0000\", \" Birth: -\"]}", "", "TASK [ceph-osd : generate ceph osd docker run script] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19", "Tuesday 02 October 2018 08:41:46 -0400 (0:00:00.739) 0:02:58.978 ******* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"5542e950125b3dbd25e146575a148538f90dc2a6\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"81913dc490826e0e8f21ed305bd0867e\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484106.28-253602508062744/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30", "Tuesday 02 October 2018 08:41:47 -0400 
(0:00:00.963) 0:02:59.942 ******* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484107.23-64473267333508/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : systemd start osd container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41", "Tuesday 02 October 2018 08:41:48 -0400 (0:00:00.830) 0:03:00.772 ******* ", "changed: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": 
\"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", 
\"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": 
{\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": 
\"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", 
\"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", 
\"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", 
\"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => 
(item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": 
\"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", 
\"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": 
\"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14903\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": 
\"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14903\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": 
\"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87", "Tuesday 02 October 2018 08:41:51 -0400 (0:00:03.162) 0:03:03.934 ******* ", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', 
u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95", "Tuesday 02 October 2018 08:41:51 -0400 (0:00:00.069) 0:03:04.004 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : wait for all osd to be up] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2", "Tuesday 02 October 2018 08:41:51 -0400 (0:00:00.077) 0:03:04.082 ******* ", "changed: [ceph-0 -> 192.168.24.10] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.797558\", \"end\": \"2018-10-02 12:41:52.398515\", \"rc\": 0, \"start\": \"2018-10-02 12:41:51.600957\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : list existing pool(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12", "Tuesday 02 October 2018 08:41:52 -0400 (0:00:01.150) 0:03:05.232 ******* ", "changed: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': 
u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.398746\", \"end\": \"2018-10-02 12:41:53.125999\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:52.727253\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.323270\", \"end\": \"2018-10-02 12:41:53.665322\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.342052\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.313361\", \"end\": \"2018-10-02 12:41:54.175960\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, 
\"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.862599\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.334943\", \"end\": \"2018-10-02 12:41:54.713587\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.378644\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.322561\", \"end\": \"2018-10-02 12:41:55.241928\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.919367\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : set_fact rule_name before luminous] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21", "Tuesday 02 October 2018 
08:41:55 -0400 (0:00:02.813) 0:03:08.046 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact rule_name from luminous] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:28", "Tuesday 02 October 2018 08:41:55 -0400 (0:00:00.051) 0:03:08.097 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rule_name\": \"replicated_rule\"}, \"changed\": false}", "", "TASK [ceph-osd : create openstack pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:35", "Tuesday 02 October 2018 08:41:55 -0400 (0:00:00.135) 0:03:08.233 ******* ", "ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-10-02 12:41:53.125999', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.398746', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, 
u'start': u'2018-10-02 12:41:52.727253', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.041837\", \"end\": \"2018-10-02 12:41:56.791957\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.10\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.398746\", \"end\": \"2018-10-02 12:41:53.125999\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:52.727253\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-10-02 12:41:55.750120\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => 
(item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-10-02 12:41:53.665322', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.323270', '_ansible_item_label': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:53.342052', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.061873\", \"end\": \"2018-10-02 12:41:58.086015\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.10\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"openstack_gnocchi\", \"name\": 
\"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.323270\", \"end\": \"2018-10-02 12:41:53.665322\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.342052\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-10-02 12:41:57.024142\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-10-02 12:41:54.175960', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec 
ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.313361', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:53.862599', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.012535\", \"end\": \"2018-10-02 12:41:59.314032\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.10\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.313361\", \"end\": \"2018-10-02 12:41:54.175960\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": 
\"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:53.862599\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-10-02 12:41:58.301497\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-10-02 12:41:54.713587', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.334943', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:54.378644', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", 
\"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.006895\", \"end\": \"2018-10-02 12:42:00.545270\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.10\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.334943\", \"end\": \"2018-10-02 12:41:54.713587\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.378644\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-10-02 12:41:59.538375\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', 
u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-10-02 12:41:55.241928', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.322561', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-10-02 12:41:54.919367', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.163136\", \"end\": \"2018-10-02 12:42:01.934968\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.10\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": 
\"0:00:00.322561\", \"end\": \"2018-10-02 12:41:55.241928\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-10-02 12:41:54.919367\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-10-02 12:42:00.771832\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : assign application to pool(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:55", "Tuesday 02 October 2018 08:42:02 -0400 (0:00:06.557) 0:03:14.790 ******* ", "ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:00.647796\", \"end\": \"2018-10-02 12:42:02.943465\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:02.295669\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'openstack_gnocchi', 
u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.820625\", \"end\": \"2018-10-02 12:42:03.967637\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:03.147012\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"backups\", \"rbd\"], \"delta\": \"0:00:00.763479\", \"end\": \"2018-10-02 12:42:04.954489\", \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:04.191010\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.824001\", \"end\": \"2018-10-02 12:42:05.997335\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-10-02 
12:42:05.173334\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.10] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.747992\", \"end\": \"2018-10-02 12:42:06.956609\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:06.208617\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack cephx key(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:64", "Tuesday 02 October 2018 08:42:07 -0400 (0:00:05.019) 0:03:19.810 ******* ", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.796814\", \"end\": \"2018-10-02 12:42:08.282332\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": 
\"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:07.485518\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.851542\", \"end\": \"2018-10-02 12:42:09.348227\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:08.496685\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.838943\", \"end\": \"2018-10-02 12:42:10.388446\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": 
\"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:09.549503\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : fetch openstack cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:77", "Tuesday 02 October 2018 08:42:10 -0400 (0:00:03.414) 0:03:23.224 ******* ", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"64fff1482317a1d8364a6da8e84d29db06535fbc\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"dd3eb3ded7a35db5efca563964aa5ef4\", \"remote_checksum\": \"64fff1482317a1d8364a6da8e84d29db06535fbc\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": true, \"checksum\": \"5b562922a577010a9622d5ab7f25776e35e06a5e\", \"dest\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"f8ebf4d94e396034a17e0a1209fd2c2c\", \"remote_checksum\": \"5b562922a577010a9622d5ab7f25776e35e06a5e\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"17aec2a4c51a0277cc4caf052ea82bb5a542ffb8\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/4398e5b0-c63c-11e8-b95a-525400c8bd81/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"44072b3418cd73c910a4c8ab96e42054\", \"remote_checksum\": \"17aec2a4c51a0277cc4caf052ea82bb5a542ffb8\", \"remote_md5sum\": null}", "", "TASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:85", "Tuesday 02 October 2018 08:42:11 -0400 (0:00:00.615) 0:03:23.840 ******* ", "changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}]) => 
{\"changed\": true, \"checksum\": \"64fff1482317a1d8364a6da8e84d29db06535fbc\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}]) => {\"changed\": true, \"checksum\": \"5b562922a577010a9622d5ab7f25776e35e06a5e\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.10] => (item=[u'controller-0', {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', 
u'osd': u'allow rwx'}}]) => {\"changed\": true, \"checksum\": \"17aec2a4c51a0277cc4caf052ea82bb5a542ffb8\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:01.226) 0:03:25.067 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.190) 0:03:25.258 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.048) 0:03:25.306 ******* ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.087) 0:03:25.393 ******* ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.090) 0:03:25.484 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": 
false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:12 -0400 (0:00:00.201) 0:03:25.685 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.195) 0:03:25.881 ******* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"6631c34a339c45ab1081b01015293e952e36893e\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"308c89936c25e77f74e78c1e4905ee1a\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 3081, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484133.34-274789297423069/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.713) 0:03:26.594 ******* ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Tuesday 02 October 2018 08:42:13 -0400 (0:00:00.077) 0:03:26.672 ******* ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.085) 0:03:26.757 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.176) 0:03:26.934 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": 
false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.184) 0:03:27.118 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.046) 0:03:27.165 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.053) 0:03:27.218 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.055) 0:03:27.274 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.161) 0:03:27.435 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.075) 0:03:27.511 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.050) 0:03:27.561 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.059) 0:03:27.621 ******* ", 
"skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.049) 0:03:27.670 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Tuesday 02 October 2018 08:42:14 -0400 (0:00:00.064) 0:03:27.734 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:27.800 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.044) 0:03:27.844 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.053) 0:03:27.898 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.049) 0:03:27.947 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:28.012 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart 
script] **********************", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.065) 0:03:28.078 ******* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.041) 0:03:28.120 ******* ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.078) 0:03:28.198 ******* ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.077) 0:03:28.276 ******* ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph osd install 'Complete'] *****************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:156", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.093) 0:03:28.369 ******* ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20181002084215Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [mdss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rgws] ********************************************************************", "skipping: no hosts matched", "", "PLAY [nfss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rbdmirrors] 
**************************************************************", "skipping: no hosts matched", "", "PLAY [restapis] ****************************************************************", "skipping: no hosts matched", "", "PLAY [clients] *****************************************************************", "", "TASK [set ceph client install 'In Progress'] ***********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:307", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.154) 0:03:28.524 ******* ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20181002084215Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.083) 0:03:28.608 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.047) 0:03:28.656 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Tuesday 02 October 2018 08:42:15 -0400 (0:00:00.049) 0:03:28.705 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.053) 0:03:28.759 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:28.805 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:28.851 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:28.900 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.049) 0:03:28.949 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:28.995 ******* ", "skipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.053) 0:03:29.049 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:29.097 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.145 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.048) 0:03:29.194 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.240 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:29.286 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.046) 0:03:29.333 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.380 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.426 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.471 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.516 ******* ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.044) 0:03:29.561 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.047) 0:03:29.608 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.653 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.045) 0:03:29.698 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Tuesday 02 October 2018 08:42:16 -0400 (0:00:00.044) 0:03:29.743 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha 
socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.046) 0:03:29.789 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.046) 0:03:29.836 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.047) 0:03:29.884 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.056) 0:03:29.941 ******* ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.241) 0:03:30.183 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.075) 0:03:30.259 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": 
\"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.078) 0:03:30.337 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.075) 0:03:30.413 ******* ", "ok: [compute-0 -> 192.168.24.10] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Tuesday 02 October 2018 08:42:17 -0400 (0:00:00.151) 0:03:30.564 ******* ", "ok: [compute-0 -> 192.168.24.10] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.392824\", \"end\": \"2018-10-02 12:42:18.410684\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:18.017860\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":18,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":565141504,\\\"bytes_avail\\\":55748530176,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. 
If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"4398e5b0-c63c-11e8-b95a-525400c8bd81\\\",\\\"modified\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"created\\\":\\\"2018-10-02 12:39:39.460029\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.15:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.15:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":18,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":565141504,\\\"bytes_avail\\\":55748530176,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.15:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Tuesday 02 October 2018 08:42:18 -0400 (0:00:00.650) 0:03:31.215 ******* 
", "ok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Tuesday 02 October 2018 08:42:18 -0400 (0:00:00.192) 0:03:31.407 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Tuesday 02 October 2018 08:42:18 -0400 (0:00:00.053) 0:03:31.461 ******* ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Tuesday 02 October 2018 08:42:18 -0400 (0:00:00.192) 0:03:31.654 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.15:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-10-02 12:39:39.460029\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\", \"modified\": \"2018-10-02 12:39:39.460029\", \"mons\": [{\"addr\": \"172.17.3.15:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.15:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 18, \"full\": false, \"nearfull\": false, \"num_in_osds\": 5, \"num_osds\": 5, \"num_remapped_pgs\": 0, \"num_up_osds\": 5}}, \"pgmap\": {\"bytes_avail\": 55748530176, \"bytes_total\": 56313671680, \"bytes_used\": 565141504, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 160, \"num_pools\": 5, \"pgs_by_state\": [{\"count\": 160, \"state_name\": \"active+clean\"}]}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Tuesday 02 October 2018 08:42:18 -0400 (0:00:00.085) 0:03:31.740 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"4398e5b0-c63c-11e8-b95a-525400c8bd81\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.075) 0:03:31.816 ******* ", "ok: [compute-0] => 
{\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.190) 0:03:32.007 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.052) 0:03:32.059 ******* ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 4398e5b0-c63c-11e8-b95a-525400c8bd81 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.204) 0:03:32.263 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.047) 0:03:32.311 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.043) 
0:03:32.354 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.204) 0:03:32.559 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Tuesday 02 October 2018 08:42:19 -0400 (0:00:00.046) 0:03:32.605 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.201) 0:03:32.807 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.205) 0:03:33.013 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.192) 0:03:33.206 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.059) 0:03:33.266 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.181) 0:03:33.447 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.047) 0:03:33.495 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.051) 0:03:33.546 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.049) 0:03:33.596 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.051) 0:03:33.647 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Tuesday 02 October 2018 08:42:20 -0400 (0:00:00.052) 0:03:33.700 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Tuesday 02 October 2018 08:42:21 -0400 (0:00:00.082) 0:03:33.782 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Tuesday 02 October 2018 08:42:21 -0400 (0:00:00.047) 0:03:33.830 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Tuesday 02 October 2018 08:42:21 -0400 (0:00:00.074) 0:03:33.905 ******* ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": 
\"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", 
\"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Tuesday 02 October 2018 08:42:23 -0400 (0:00:02.115) 0:03:36.020 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Tuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.074 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Tuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.127 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Tuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.181 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Tuesday 02 October 2018 08:42:23 -0400 (0:00:00.053) 0:03:36.234 ******* ", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Tuesday 02 October 2018 08:42:23 
-0400 (0:00:00.451) 0:03:36.686 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.083) 0:03:36.769 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.043) 0:03:36.813 ******* ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.025770\", \"end\": \"2018-10-02 12:42:24.227023\", \"rc\": 0, \"start\": \"2018-10-02 12:42:24.201253\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 8633870/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 8633870/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.264) 0:03:37.077 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.081) 0:03:37.159 ******* ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.022877\", \"end\": \"2018-10-02 12:42:24.572834\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:24.549957\", 
\"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.261) 0:03:37.421 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.058) 0:03:37.479 ******* ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.066) 0:03:37.546 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.056) 0:03:37.602 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.064) 0:03:37.666 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we 
find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Tuesday 02 October 2018 08:42:24 -0400 (0:00:00.051) 0:03:37.718 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.049) 0:03:37.768 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.043) 0:03:37.811 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.041) 0:03:37.853 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.044) 0:03:37.897 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.058) 0:03:37.956 ******* ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.053) 0:03:38.010 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.060 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.110 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.048) 0:03:38.158 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.048) 0:03:38.207 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.059) 0:03:38.267 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.050) 0:03:38.317 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.046) 0:03:38.363 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.046) 0:03:38.410 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.055) 0:03:38.465 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.054) 0:03:38.520 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.062) 0:03:38.583 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.064) 0:03:38.648 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Tuesday 02 October 2018 08:42:25 -0400 (0:00:00.055) 0:03:38.704 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.053) 0:03:38.757 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.050) 0:03:38.808 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Tuesday 02 
October 2018 08:42:26 -0400 (0:00:00.050) 0:03:38.859 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.060) 0:03:38.920 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.051) 0:03:38.971 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.051) 0:03:39.022 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.049) 0:03:39.072 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.048) 0:03:39.120 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact 
ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.050) 0:03:39.171 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.057) 0:03:39.228 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Tuesday 02 October 2018 08:42:26 -0400 (0:00:00.052) 0:03:39.281 ******* ", "ok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:14.082760\", \"end\": \"2018-10-02 12:42:40.766740\", \"rc\": 0, \"start\": \"2018-10-02 12:42:26.683980\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Tuesday 02 October 2018 08:42:40 -0400 (0:00:14.341) 0:03:53.623 ******* ", "changed: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027387\", \"end\": \"2018-10-02 12:42:41.058482\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-10-02 12:42:41.031095\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n 
\\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base 
image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/b8d0a98064d555daef74d7b023d00f17de29f7cfd26a4f21a98a3ca39f66136f/diff:/var/lib/docker/overlay2/3dafe6d2bc5c1dbf6269c88efd0920f9a59be9445b59cbf5f08594f915afa247/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": 
\\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", 
\" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/b8d0a98064d555daef74d7b023d00f17de29f7cfd26a4f21a98a3ca39f66136f/diff:/var/lib/docker/overlay2/3dafe6d2bc5c1dbf6269c88efd0920f9a59be9445b59cbf5f08594f915afa247/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/efd929f060a7c20e0d9a3ba5035ffec8cc278002e690e4fb1aa58c640fba2dea/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.301) 0:03:53.925 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.094) 0:03:54.019 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.051) 0:03:54.070 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.060) 0:03:54.131 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.048) 0:03:54.179 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.050) 0:03:54.230 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.052) 0:03:54.282 ******* ", "skipping: [compute-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.046) 0:03:54.329 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.052) 0:03:54.381 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.047) 0:03:54.429 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.056) 0:03:54.486 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.051) 0:03:54.537 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", 
"Tuesday 02 October 2018 08:42:41 -0400 (0:00:00.054) 0:03:54.591 ******* ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.448590\", \"end\": \"2018-10-02 12:42:42.421062\", \"rc\": 0, \"start\": \"2018-10-02 12:42:41.972472\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Tuesday 02 October 2018 08:42:42 -0400 (0:00:00.683) 0:03:55.275 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Tuesday 02 October 2018 08:42:42 -0400 (0:00:00.186) 0:03:55.461 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Tuesday 02 October 2018 08:42:42 -0400 (0:00:00.049) 0:03:55.511 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Tuesday 02 October 2018 08:42:42 -0400 (0:00:00.049) 0:03:55.560 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": 
\"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Tuesday 02 October 2018 08:42:43 -0400 (0:00:00.195) 0:03:55.756 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Tuesday 02 October 2018 08:42:43 -0400 (0:00:00.049) 0:03:55.806 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Tuesday 02 October 2018 08:42:43 -0400 (0:00:00.056) 0:03:55.862 ******* ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", 
\"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:01.032) 0:03:56.895 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.048) 0:03:56.944 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.054) 0:03:56.998 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.174) 0:03:57.173 ******* ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.052) 0:03:57.225 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.046) 0:03:57.272 ******* ", "changed: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Tuesday 02 October 2018 08:42:44 -0400 (0:00:00.235) 0:03:57.507 ******* ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set 
_osd_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0", "changed: [compute-0] => {\"changed\": true, \"checksum\": \"55b1f0577e67c2bfbbd30f40df9ea9b389d9639b\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": 
\"61444335eb3c3ef3239f2dde50381d2b\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1320, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484164.81-36246248334490/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Tuesday 02 October 2018 08:42:46 -0400 (0:00:02.175) 0:03:59.683 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2", "Tuesday 02 October 2018 08:42:46 -0400 (0:00:00.054) 0:03:59.738 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.044) 0:03:59.782 ******* ", "skipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => 
(item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.069) 0:03:59.852 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : create filtered clients group] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:20", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.046) 0:03:59.899 ******* ", "creating host via 'add_host': hostname=compute-0", "changed: [compute-0] => (item=compute-0) => {\"add_host\": {\"groups\": [\"_filtered_clients\"], \"host_name\": \"compute-0\", 
\"host_vars\": {}}, \"changed\": true, \"item\": \"compute-0\"}", "", "TASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:28", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.116) 0:04:00.015 ******* ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-12\", \"300\"], \"delta\": \"0:00:00.233408\", \"end\": \"2018-10-02 12:42:47.633663\", \"rc\": 0, \"start\": \"2018-10-02 12:42:47.400255\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"28f3fe1dc230246e335c7a5a364dba44e8d407c872d491398c86c6d79a098f3e\", \"stdout_lines\": [\"28f3fe1dc230246e335c7a5a364dba44e8d407c872d491398c86c6d79a098f3e\"]}", "", "TASK [ceph-client : set_fact delegated_node] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:43", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.468) 0:04:00.484 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-client : set_fact condition_copy_admin_key] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:47", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.073) 0:04:00.557 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}", "", "TASK [ceph-client : set_fact docker_exec_cmd] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:51", "Tuesday 02 October 2018 08:42:47 -0400 (0:00:00.077) 0:04:00.635 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}", "", "TASK 
[ceph-client : create cephx key(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:57", "Tuesday 02 October 2018 08:42:48 -0400 (0:00:00.137) 0:04:00.772 ******* ", "changed: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.910400\", \"end\": \"2018-10-02 12:42:49.145072\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:48.234672\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.manila.keyring\"], \"delta\": \"0:00:00.869185\", \"end\": \"2018-10-02 12:42:50.299889\", \"item\": {\"caps\": {\"mds\": \"allow 
*\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:49.430704\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.928657\", \"end\": \"2018-10-02 12:42:51.412385\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-10-02 12:42:50.483728\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-client : slurp client cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:75", "Tuesday 02 October 2018 08:42:51 -0400 (0:00:03.469) 0:04:04.241 ******* ", "ok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'name': u'client.openstack'}) => {\"changed\": false, \"content\": 
\"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}", "ok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'name': u'client.manila'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}", "ok: [compute-0 -> 192.168.24.10] => (item={u'caps': {u'mgr': u'allow *', u'mon': 
u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}", "", "TASK [ceph-client : list existing pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:87", "Tuesday 02 October 2018 08:42:52 -0400 (0:00:00.606) 0:04:04.848 ******* ", "", "TASK [ceph-client : create ceph pool(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:99", "Tuesday 02 October 2018 08:42:52 -0400 (0:00:00.055) 0:04:04.903 ******* ", "", "TASK [ceph-client : get client cephx keys] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:122", "Tuesday 02 October 2018 08:42:52 -0400 (0:00:00.048) 0:04:04.952 ******* ", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {u'mode': 
u'0600', u'name': u'client.openstack', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}}) => {\"changed\": true, \"checksum\": \"64fff1482317a1d8364a6da8e84d29db06535fbc\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBWjMrM2JrL1NtTy9nK0psWXZCWDQxUT09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"dd3eb3ded7a35db5efca563964aa5ef4\", \"mode\": \"0600\", \"owner\": \"167\", 
\"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484172.41-235840774940833/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {u'mode': u'0600', u'name': u'client.manila', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}}) => {\"changed\": true, \"checksum\": \"5b562922a577010a9622d5ab7f25776e35e06a5e\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": 
\"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUJrWUxOYkFBQUFBQkFBTDRpd3lRNnZBOWx1Z1VEdEI1ZmFpZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"f8ebf4d94e396034a17e0a1209fd2c2c\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484172.89-139378585418982/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {u'mode': u'0600', u'name': u'client.radosgw', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.10'}, '_ansible_ignore_errors': None, 
'_ansible_item_label': {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}}) => {\"changed\": true, \"checksum\": \"17aec2a4c51a0277cc4caf052ea82bb5a542ffb8\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCa1lMTmJBQUFBQUJBQWlJaTY4WUVnZWtPenBCa0pTU2lONGc9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"44072b3418cd73c910a4c8ab96e42054\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1538484173.36-204390493911564/source\", \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:53 -0400 (0:00:01.624) 0:04:06.576 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.191) 0:04:06.767 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.048) 0:04:06.816 ******* ", "skipping: [compute-0] => (item=controller-0) => 
{\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.084) 0:04:06.900 ******* ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.199) 0:04:07.099 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.157) 0:04:07.257 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.074) 0:04:07.332 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.046) 0:04:07.378 ******* ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.085) 0:04:07.464 ******* ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.085) 0:04:07.549 ******* ", "ok: [compute-0] => 
{\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.076) 0:04:07.626 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Tuesday 02 October 2018 08:42:54 -0400 (0:00:00.077) 0:04:07.704 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.049) 0:04:07.754 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.059) 0:04:07.813 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.056) 0:04:07.869 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.076) 0:04:07.946 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.077) 0:04:08.023 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] 
***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.048) 0:04:08.071 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.061) 0:04:08.133 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.058) 0:04:08.192 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.073) 0:04:08.266 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.077) 0:04:08.343 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.046) 0:04:08.389 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.054) 0:04:08.444 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.057) 0:04:08.501 ******* ", "ok: [compute-0] => {\"ansible_facts\": 
{\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.079) 0:04:08.580 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.076) 0:04:08.657 ******* ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Tuesday 02 October 2018 08:42:55 -0400 (0:00:00.047) 0:04:08.705 ******* ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.086) 0:04:08.791 ******* ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.082) 0:04:08.873 ******* ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph client install 'Complete'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:324", "Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.105) 0:04:08.979 ******* ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20181002084256Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY RECAP 
*********************************************************************", "ceph-0 : ok=88 changed=19 unreachable=0 failed=0 ", "compute-0 : ok=56 changed=8 unreachable=0 failed=0 ", "controller-0 : ok=121 changed=22 unreachable=0 failed=0 ", "", "", "INSTALLER STATUS ***************************************************************", "Install Ceph Monitor : Complete (0:01:02)", "Install Ceph Manager : Complete (0:00:25)", "Install Ceph OSD : Complete (0:01:47)", "Install Ceph Client : Complete (0:00:41)", "", "Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.067) 0:04:09.046 ******* ", "=============================================================================== "]} >2018-10-02 08:42:56,751 p=1004 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 08:42:56,751 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:04:13.153) 0:14:09.484 ******* >2018-10-02 08:42:56,771 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,786 p=1004 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 08:42:56,786 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.034) 0:14:09.519 ******* >2018-10-02 08:42:56,803 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,816 p=1004 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 08:42:56,816 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.030) 0:14:09.549 ******* >2018-10-02 08:42:56,838 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,850 p=1004 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-10-02 08:42:56,850 p=1004 u=mistral | Tuesday 02 October 2018 
08:42:56 -0400 (0:00:00.034) 0:14:09.584 ******* >2018-10-02 08:42:56,871 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,884 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:42:56,884 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.033) 0:14:09.618 ******* >2018-10-02 08:42:56,905 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,918 p=1004 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 08:42:56,919 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.034) 0:14:09.652 ******* >2018-10-02 08:42:56,939 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,955 p=1004 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 08:42:56,955 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.035) 0:14:09.688 ******* >2018-10-02 08:42:56,976 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:56,989 p=1004 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 08:42:56,990 p=1004 u=mistral | Tuesday 02 October 2018 08:42:56 -0400 (0:00:00.034) 0:14:09.723 ******* >2018-10-02 08:42:57,015 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,029 p=1004 u=mistral | TASK [Write role data file] **************************************************** >2018-10-02 08:42:57,029 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.039) 0:14:09.763 ******* >2018-10-02 08:42:57,054 p=1004 u=mistral | skipping: 
[undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,069 p=1004 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 08:42:57,069 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.039) 0:14:09.802 ******* >2018-10-02 08:42:57,089 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,103 p=1004 u=mistral | TASK [Delete param file] ******************************************************* >2018-10-02 08:42:57,104 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.034) 0:14:09.837 ******* >2018-10-02 08:42:57,125 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,139 p=1004 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 08:42:57,140 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.036) 0:14:09.873 ******* >2018-10-02 08:42:57,165 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,177 p=1004 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 08:42:57,177 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.037) 0:14:09.911 ******* >2018-10-02 08:42:57,198 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,213 p=1004 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 08:42:57,213 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.035) 0:14:09.947 ******* >2018-10-02 08:42:57,234 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,249 
p=1004 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 08:42:57,249 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.035) 0:14:09.982 ******* >2018-10-02 08:42:57,269 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,284 p=1004 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 08:42:57,284 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.034) 0:14:10.017 ******* >2018-10-02 08:42:57,303 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,309 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for 2] *************************************** >2018-10-02 08:42:57,316 p=1004 u=mistral | PLAY [Overcloud common deploy step tasks 2] ************************************ >2018-10-02 08:42:57,346 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 08:42:57,347 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.062) 0:14:10.080 ******* >2018-10-02 08:42:57,378 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,405 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,417 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,444 p=1004 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 08:42:57,444 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.097) 0:14:10.178 ******* >2018-10-02 08:42:57,479 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 08:42:57,508 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,523 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,549 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 08:42:57,549 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.104) 0:14:10.282 ******* >2018-10-02 08:42:57,580 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,605 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,620 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,644 p=1004 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 08:42:57,644 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.095) 0:14:10.378 ******* >2018-10-02 08:42:57,675 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,702 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,717 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,742 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:42:57,742 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.097) 0:14:10.475 ******* >2018-10-02 08:42:57,772 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,802 p=1004 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,815 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:57,843 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:42:57,843 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.101) 0:14:10.577 ******* >2018-10-02 08:42:57,876 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:42:57,902 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:42:57,916 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:42:57,941 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 08:42:57,941 p=1004 u=mistral | Tuesday 02 October 2018 08:42:57 -0400 (0:00:00.097) 0:14:10.674 ******* >2018-10-02 08:42:57,981 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,008 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,020 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,048 p=1004 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 08:42:58,048 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.107) 0:14:10.782 ******* >2018-10-02 08:42:58,082 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,113 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,134 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
08:42:58,162 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 08:42:58,162 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.113) 0:14:10.895 ******* >2018-10-02 08:42:58,193 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,221 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,235 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,262 p=1004 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 08:42:58,262 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.100) 0:14:10.995 ******* >2018-10-02 08:42:58,294 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,323 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,340 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,378 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:42:58,378 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.116) 0:14:11.111 ******* >2018-10-02 08:42:58,426 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,458 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,474 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,502 p=1004 u=mistral | TASK [Diff 
docker-puppet.json changes for check mode] ************************** >2018-10-02 08:42:58,502 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.124) 0:14:11.235 ******* >2018-10-02 08:42:58,537 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:42:58,614 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:42:58,634 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:42:58,662 p=1004 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 08:42:58,662 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.159) 0:14:11.395 ******* >2018-10-02 08:42:58,696 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,723 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,737 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,764 p=1004 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 08:42:58,764 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.102) 0:14:11.497 ******* >2018-10-02 08:42:58,797 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,825 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,838 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,866 p=1004 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-10-02 08:42:58,866 p=1004 u=mistral | Tuesday 02 October 2018 08:42:58 -0400 (0:00:00.101) 0:14:11.599 ******* >2018-10-02 08:42:58,927 p=1004 u=mistral | skipping: 
[controller-0] => (item=create_swift_secret.sh) => {"changed": false, "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,929 p=1004 u=mistral | skipping: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": false, "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath 
/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,930 p=1004 u=mistral | skipping: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,943 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_discover_hosts.sh) => {"changed": false, "item": ["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host 
discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,946 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_ensure_default_cell.sh) => {"changed": false, "item": ["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | 
awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,958 p=1004 u=mistral | skipping: [controller-0] => (item=set_swift_keymaster_key_id.sh) => {"changed": false, "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} 
>2018-10-02 08:42:58,980 p=1004 u=mistral | skipping: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:58,987 p=1004 u=mistral | skipping: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": false, "item": ["nova_statedir_ownership.py", {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. 
Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n 
pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,014 p=1004 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-10-02 08:42:59,014 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.148) 0:14:11.747 ******* >2018-10-02 08:42:59,049 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,083 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,085 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,086 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the 
output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,087 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,088 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,089 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,092 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,092 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,093 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,093 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,096 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,103 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-10-02 08:42:59,103 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,106 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,112 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,117 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,124 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,130 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,135 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,136 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,164 p=1004 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-10-02 08:42:59,164 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.149) 0:14:11.897 ******* >2018-10-02 08:42:59,202 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has 
been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,235 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,249 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:42:59,276 p=1004 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-10-02 08:42:59,276 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.111) 0:14:12.009 ******* >2018-10-02 08:42:59,309 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,336 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,354 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,384 p=1004 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-10-02 08:42:59,384 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.108) 0:14:12.118 ******* >2018-10-02 08:42:59,445 p=1004 u=mistral | skipping: [ceph-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,448 p=1004 u=mistral | skipping: [ceph-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,452 p=1004 u=mistral | skipping: [ceph-0] => (item=step_3) => {"changed": false, "item": ["step_3", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,457 p=1004 u=mistral | skipping: [controller-0] => (item=step_1) => {"changed": 
false, "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,460 p=1004 u=mistral | skipping: [ceph-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,467 p=1004 u=mistral | skipping: [controller-0] => (item=step_2) => {"changed": false, "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": 
["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": 
false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": 
"192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,469 p=1004 
u=mistral | skipping: [ceph-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,472 p=1004 u=mistral | skipping: [ceph-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,473 p=1004 u=mistral | skipping: [compute-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,479 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", 
"cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", 
"ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", 
"neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": 
"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, 
"swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,481 p=1004 u=mistral | skipping: [compute-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,490 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", 
"/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", 
"/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": 
"root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": 
{"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, 
"sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,494 p=1004 u=mistral | skipping: [compute-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", 
"/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,498 p=1004 u=mistral | skipping: 
[compute-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,502 p=1004 u=mistral | skipping: [controller-0] => (item=step_5) => {"changed": false, "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": 
"host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, 
"cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", 
"/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,503 p=1004 u=mistral | skipping: [compute-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,504 p=1004 u=mistral | skipping: [controller-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,514 p=1004 u=mistral | skipping: [compute-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,540 p=1004 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 08:42:59,540 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.155) 0:14:12.273 ******* >2018-10-02 08:42:59,571 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,596 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,609 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,634 p=1004 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 08:42:59,634 p=1004 u=mistral | Tuesday 02 October 2018 08:42:59 -0400 (0:00:00.093) 0:14:12.367 ******* >2018-10-02 08:42:59,693 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,740 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,747 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,758 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,760 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,765 p=1004 u=mistral 
| skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,772 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,778 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": 
"/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,783 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,846 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,852 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,858 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", 
"recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,863 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,870 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,876 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,882 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => {"changed": false, "item": 
["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,889 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,894 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,900 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": 
"/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,906 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,912 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,918 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", 
"merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,925 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,930 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,935 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,942 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,948 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,953 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => 
{"changed": false, "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,960 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,965 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,971 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": 
"/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,977 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,984 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,990 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:42:59,996 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,002 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,007 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,014 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,020 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,025 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", 
"path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,032 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,037 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": 
"neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,044 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,049 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,056 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,061 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": 
"/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,067 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,074 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,079 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,086 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": 
[{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,091 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,098 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,104 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,111 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": false, 
"item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,118 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,123 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,130 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,135 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,141 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,147 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,153 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,158 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,165 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], 
"skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,170 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,176 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,181 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,187 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,194 p=1004 
u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,199 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,205 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,211 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,217 p=1004 u=mistral | skipping: 
[controller-0] => (item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,223 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,229 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,264 p=1004 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 08:43:00,264 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.630) 0:14:12.997 ******* >2018-10-02 08:43:00,278 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:43:00,306 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:43:00,334 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:43:00,363 p=1004 u=mistral | 
TASK [Write docker-puppet-tasks json files] ************************************ >2018-10-02 08:43:00,363 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.099) 0:14:13.097 ******* >2018-10-02 08:43:00,425 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,426 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,469 p=1004 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 08:43:00,469 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.105) 0:14:13.202 ******* >2018-10-02 08:43:00,503 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,532 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,547 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,574 p=1004 u=mistral | TASK [Check for /etc/puppet/check-mode 
directory for check mode] *************** >2018-10-02 08:43:00,575 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.105) 0:14:13.308 ******* >2018-10-02 08:43:00,607 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,635 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,645 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,672 p=1004 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 08:43:00,673 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.098) 0:14:13.406 ******* >2018-10-02 08:43:00,705 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,734 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,751 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:00,782 p=1004 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 08:43:00,782 p=1004 u=mistral | Tuesday 02 October 2018 08:43:00 -0400 (0:00:00.109) 0:14:13.516 ******* >2018-10-02 08:43:01,392 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484180.86-270443795413867/source", "state": "file", "uid": 0} >2018-10-02 08:43:01,399 p=1004 u=mistral | changed: [controller-0] => 
{"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484180.83-17288480866279/source", "state": "file", "uid": 0} >2018-10-02 08:43:01,431 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484180.89-174319054362841/source", "state": "file", "uid": 0} >2018-10-02 08:43:01,456 p=1004 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 08:43:01,456 p=1004 u=mistral | Tuesday 02 October 2018 08:43:01 -0400 (0:00:00.673) 0:14:14.189 ******* >2018-10-02 08:43:01,485 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:01,510 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:01,520 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:01,546 p=1004 u=mistral | TASK [Run puppet host configuration for step 2] ******************************** >2018-10-02 08:43:01,546 p=1004 u=mistral | Tuesday 02 October 2018 08:43:01 -0400 (0:00:00.090) 0:14:14.280 ******* >2018-10-02 08:43:11,573 p=1004 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:43:12,305 
p=1004 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:43:16,709 p=1004 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:43:16,734 p=1004 u=mistral | TASK [Debug output for task: Run puppet host configuration for step 2] ********* >2018-10-02 08:43:16,735 p=1004 u=mistral | Tuesday 02 October 2018 08:43:16 -0400 (0:00:15.188) 0:14:29.468 ******* >2018-10-02 08:43:16,850 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.14 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller2]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 4.06 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Corrective change: 1", > " Total: 216", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", 
> " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.14", > " Service: 0.28", > " Package: 0.37", > " Pcmk property: 0.38", > " Exec: 0.82", > " Pcmk resource default: 1.11", > " Last run: 1538484196", > " Config retrieval: 3.66", > " Total: 6.82", > " Filebucket: 0.00", > "Version:", > " Config: 1538484188", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:43:16,874 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.77 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage2]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.16 seconds", > "Changes:", > " Total: 3", > "Events:", > " Success: 3", > "Resources:", > " Corrective change: 1", > " Total: 134", > " Out of sync: 3", > " Changed: 3", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.00", > " Firewall: 0.00", > " Sysctl runtime: 0.01", > " Augeas: 0.01", > " File: 0.10", > " Service: 0.11", > " Exec: 0.20", > " Package: 0.24", > " Last run: 1538484191", > " Config retrieval: 2.08", > " Total: 2.77", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1538484188", > " Puppet: 4.8.2", > "Warning: 
Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:43:16,895 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.95 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute2]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.36 seconds", > "Changes:", > " Total: 3", > "Events:", > " Success: 3", > "Resources:", > " Corrective change: 1", > " Total: 140", > " Out of sync: 3", > " Changed: 3", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.05", > " Service: 0.12", > " Exec: 0.21", > " Package: 0.24", > " Last run: 1538484191", > " Config retrieval: 2.32", > " Total: 2.97", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1538484188", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:43:16,923 p=1004 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 2] ***************** >2018-10-02 08:43:16,923 p=1004 u=mistral | Tuesday 02 October 2018 08:43:16 -0400 (0:00:00.188) 0:14:29.656 ******* >2018-10-02 08:43:16,954 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:43:16,979 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:43:16,995 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:43:17,024 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 2] *** >2018-10-02 08:43:17,024 p=1004 u=mistral | Tuesday 02 October 2018 08:43:17 -0400 (0:00:00.100) 0:14:29.757 ******* >2018-10-02 08:43:17,098 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:43:17,124 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:43:17,136 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:43:17,160 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:43:17,160 p=1004 u=mistral | Tuesday 02 October 2018 08:43:17 -0400 (0:00:00.136) 0:14:29.893 ******* >2018-10-02 08:43:17,190 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:17,215 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:43:17,229 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:43:17,254 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:43:17,255 p=1004 u=mistral | Tuesday 02 October 2018 08:43:17 -0400 (0:00:00.094) 0:14:29.988 ******* >2018-10-02 08:43:17,285 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:43:17,311 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:43:17,324 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:43:17,352 p=1004 u=mistral | TASK [Start containers for step 2] ********************************************* >2018-10-02 08:43:17,352 p=1004 u=mistral | Tuesday 02 October 2018 08:43:17 -0400 (0:00:00.097) 0:14:30.086 ******* >2018-10-02 08:43:17,885 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:43:17,902 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:37,536 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:37,563 p=1004 u=mistral | TASK [Debug output for task: Start containers for step 2] ********************** >2018-10-02 08:50:37,563 p=1004 u=mistral | Tuesday 02 October 2018 08:50:37 -0400 (0:07:20.210) 0:21:50.296 ******* >2018-10-02 08:50:37,765 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:50:37,783 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:50:51,262 p=1004 u=mistral | ok: [controller-0] => { > 
"failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "58cfa97883f0: Already exists", > "06bacefe1417: Pulling fs layer", > "06bacefe1417: Verifying Checksum", > "06bacefe1417: Download complete", > "06bacefe1417: Pull complete", > "Digest: sha256:270e3632d75065155103f336d6c9275a6f7a14ee5a0d089d8a0691c680fed78c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-engine ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-engine", > "d1bf34aac9d8: Already exists", > "47cb62b99d9b: Pulling fs layer", > "47cb62b99d9b: Verifying Checksum", > "47cb62b99d9b: Download complete", > "47cb62b99d9b: Pull complete", > "Digest: sha256:dae40346eab366b8f9f3a844861c7c0a29ea24ed57972fc958910e54c4eae446", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent", > "f3c66d22e08b: Already exists", > "eaee760df428: Pulling fs layer", > "eaee760df428: Verifying Checksum", > "eaee760df428: Download complete", > "eaee760df428: Pull complete", > "Digest: sha256:558c662f2c1b09369dbf1b1a5de368cb373d259c730f93361ba81d36cabd8045", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent", > "cda310d4305e: Pulling fs layer", > "cda310d4305e: Verifying Checksum", > "cda310d4305e: Download complete", > "cda310d4305e: Pull complete", > "Digest: sha256:810719b4dc49bc91096380c8dab4265a239fa3d0f9f0b12b52b6ca903c782c2f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", > "stdout: 5645efd71f8a722c685b5bdb2dc6dd35e6ef5b9e75d75e73e46021a7c4741027", > "stdout: ", > "stderr: Error: unable to find resource 'galera-bundle'", > "stdout: 25c4fce3cd5d8d014f680c8bcf2750506be69974b3d35b700a4380b1d7352725", > "stdout: 5210888119ffb14d12042704960bc575b6f633934c3a894e8f1f5b79220d1b6c", > "stdout: de7815ff3537926465a6ca59a1d0e07149047c73a2dd5c1c29ff6039678abf0a", > "stdout: Skipping execution since this is not the bootstrap node for this service.", > "stdout: b2f16a25bdfeea36822f487f7a03540fd1a340f871443d0104ee3fedae7cc312", > "stdout: f52546b2b4cd5436909f67a9f250046a09fedd279c40c949915e7efbf4a7442a", > "stdout: c70c6a37c734ba2af5ff13376e56d364ece8895e671a7416cd0299627fdc0869", > "stdout: a69f42b3b14bce40facd6b77ab24c477c9f37c3cac541f6f793f3a2ed14e783e", > "stdout: 37bf4a19d4fd0e2e515869894273074a48b634ee37ba0c7d191fddcadf193435", > "stdout: 41196e35c6790e38f9c502ef84283c5ef0ada22356a8ac23a325a293b06ce632", > "stdout: 294803e540a010299b3b14916d8da56e7e0dede112bb2334186ec5550dece732", > 
"stdout: 115fc7e21374256006212a127aa6c10492ac7c1e6cc71c1f561cce0780bbc74f", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from 
/etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from 
/etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > 
"Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: 
Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for ec2_userdata", > "Debug: Facter: value for ec2_userdata is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: 
Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: 
Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_policy", > "Debug: Facter: value for selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for sensu_version is still nil", > 
"Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for docker_group_gid is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > "Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: value for ssh_client_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_major", > "Debug: Facter: value for ssh_client_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_release", > "Debug: Facter: value for ssh_client_version_release is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for git_html_path is still nil", > "Debug: Facter: value for git_exec_path is still nil", > "Debug: Facter: value for git_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for 
iptables_persistent_version is still nil", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file 
/usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: 
Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRedhat: file /sbin/service does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking 
for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step 
in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new 
resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: pcmk_nodes_added: []", > 
"Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON 
backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/rabbitmq_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::rabbitmq_bundle from tripleo/profile/pacemaker/rabbitmq_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::user_ha_queues in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up 
rabbitmq::nr_ha_queues in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up enable_internal_tls in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/rabbitmq.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::rabbitmq from tripleo/profile/base/rabbitmq into production", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::config_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inter_node_ciphers in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inet_dist_interface in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::kernel_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_bootstrap_node in JSON 
backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_pass in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::stack_action in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::step in JSON backend", > "Debug: hiera(): Looking up rabbitmq_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq_environment in JSON backend", > "Debug: hiera(): Looking up rabbitmq::interface in JSON backend", > "Debug: hiera(): Looking up internal_api in JSON backend", > "Debug: hiera(): Looking up rabbit_ipv6 in JSON backend", > "Debug: hiera(): Looking up rabbitmq_kernel_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_pass in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_user in JSON backend", > "Debug: hiera(): Looking up stack_action in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/init.pp' in environment production", > "Debug: Automatically imported rabbitmq from rabbitmq into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/params.pp' in environment production", > "Debug: Automatically imported rabbitmq::params from rabbitmq/params into production", > "Debug: hiera(): Looking up rabbitmq::admin_enable in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_node_type in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_path in JSON 
backend", > "Debug: hiera(): Looking up rabbitmq::config_ranch in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_stomp in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel_statics in JSON backend", > "Debug: hiera(): Looking up rabbitmq::delete_guest_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_hostname in JSON backend", > "Debug: hiera(): Looking up rabbitmq::node_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_apt_pin in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_gpg_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_source in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_provider in JSON backend", > "Debug: hiera(): Looking up rabbitmq::repos_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::manage_python in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_group in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_home in JSON backend", > "Debug: hiera(): Looking up rabbitmq::port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_sndbuf in JSON backend", > "Debug: hiera(): 
Looking up rabbitmq::tcp_recbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::heartbeat in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cacert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_depth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert_password in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_interface in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_secure_renegotiate in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_reuse_sessions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_honor_cipher_order in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_dhfile in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_ciphers in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_auth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_server in 
JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_user_dn_pattern in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_other_bind in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_use_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_log in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::wipe_db_on_cookie_change in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_partition_handling in JSON backend", > "Debug: hiera(): Looking up rabbitmq::file_limit in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_management_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_additional_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::auth_backends in JSON backend", > "Debug: hiera(): Looking up rabbitmq::key_content in JSON backend", > "Debug: hiera(): Looking up rabbitmq::collect_statistics_interval in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_erl_dist in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmqadmin_package in JSON backend", > "Debug: hiera(): Looking up rabbitmq::archive_options in JSON backend", > "Debug: hiera(): Looking up rabbitmq::loopback_users in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/install.pp' in environment production", > "Debug: Automatically imported rabbitmq::install from rabbitmq/install into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/config.pp' in environment production", > "Debug: Automatically 
imported rabbitmq::config from rabbitmq/config into production", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq.config.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-env.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/inetrc.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmqadmin.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-server.service.d/limits.conf", > "Debug: 
template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/service.pp' in environment production", > "Debug: Automatically imported rabbitmq::service from rabbitmq/service into production", > "Debug: hiera(): Looking up rabbitmq::service::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_manage in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_name in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/management.pp' in environment production", > "Debug: Automatically imported rabbitmq::management from rabbitmq/management into production", > "Debug: hiera(): Looking up veritas_hyperscale_controller_enabled in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ocf.pp' in environment production", > "Debug: Automatically 
imported pacemaker::resource::ocf from pacemaker/resource/ocf into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::bundle::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::bundle::update_settle_secs in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::update_settle_secs in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster 
tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[rabbitmq] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-rabbitmq-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[rabbitmq-bundle] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from File[/etc/systemd/system/rabbitmq-server.service.d] to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Install] to Class[Rabbitmq::Config] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Config] to Class[Rabbitmq::Service] with 'notify'", > "Debug: Adding relationship from Class[Rabbitmq::Service] to Class[Rabbitmq::Management] 
with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.62 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1538484225'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[rabbitmq]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-rabbitmq-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[rabbitmq-bundle]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Rabbitmq::Install/before: subscribes to Class[Rabbitmq::Config]", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/require: subscribes to File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/before: subscribes to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/notify: subscribes to Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/before: subscribes to File[rabbitmq.config]", > "Debug: 
/Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Service/before: subscribes to Class[Rabbitmq::Management]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/before: subscribes to Exec[rabbitmq-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Adding autorequire relationship with File[/etc/rabbitmq]", 
> "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Params]: 
Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Debug: 
/Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}279e42511ea04897e294829a576d05d5'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: Scheduling refresh of Exec[rabbitmq-systemd-reload]", > 
"Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Unscheduling all events on Exec[rabbitmq-systemd-reload]", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: Scheduling refresh of Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/rabbitmq/rabbitmq.config", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Filebucketed /etc/rabbitmq/rabbitmq.config to puppet with sum b346ec0a8320f85f795bf612f6b02da7", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content 
changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}fe360f3aa9a3f3f3b4a3e450796bb7c1'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Info: Class[Rabbitmq::Config]: Unscheduling all events on Class[Rabbitmq::Config]", > "Debug: Class[Rabbitmq::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Service]: Resource is being skipped, unscheduling all events", > "Info: Class[Rabbitmq::Service]: Unscheduling all events on Class[Rabbitmq::Service]", > "Debug: Class[Rabbitmq::Management]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Management]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /var/lib/rabbitmq/.erlang.cookie", > "Info: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: Filebucketed /var/lib/rabbitmq/.erlang.cookie to puppet with sum 96f64654fb6230682c8ca1b1835bf8ca", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/content: content changed '{md5}96f64654fb6230682c8ca1b1835bf8ca' to '{md5}08161ebe5401a17476d2c5e9130f8303'", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all 
events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource 
is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}85274b5d58af3572868d4ef10722b50f'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17ge7a returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17ge7a property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> ", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bb1qe4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bb1qe4 property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-15macub returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-15macub property set --node controller-0 rabbitmq-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-15macub diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-15macub.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 rabbitmq-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]: The container 
Pacemaker::Property[rabbitmq-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[rabbitmq-role-controller-0]: Unscheduling all events on Pacemaker::Property[rabbitmq-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mot91m returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mot91m constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13cgeq3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13cgeq3 resource show rabbitmq-bundle > /dev/null 2>&1", > "Debug: Exists: bundle rabbitmq-bundle exists 1 location exists 1 deep_compare: true", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1paexdy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1paexdy resource bundle create rabbitmq-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=rabbitmq-cfg-files source-dir=/var/lib/kolla/config_files/rabbitmq.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=rabbitmq-cfg-data 
source-dir=/var/lib/config-data/puppet-generated/rabbitmq/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=rabbitmq-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=rabbitmq-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=rabbitmq-lib source-dir=/var/lib/rabbitmq target-dir=/var/lib/rabbitmq options=rw storage-map id=rabbitmq-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=rabbitmq-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=rabbitmq-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=rabbitmq-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=rabbitmq-log source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw storage-map id=rabbitmq-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3122 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1paexdy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1paexdy.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: location_rule_create: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lz1yt7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lz1yt7 constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster 
cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lz1yt7 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lz1yt7.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-127c0v2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-127c0v2 resource enable rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-127c0v2 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-127c0v2.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]: The container Pacemaker::Resource::Bundle[rabbitmq-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uxp8jv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uxp8jv constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10405zt returned ", > "Debug: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10405zt resource show rabbitmq > /dev/null 2>&1", > "Debug: Exists: resource rabbitmq exists 1 location exists 0 resource deep_compare: true", > "Debug: Create: resource exists 1 location exists 0", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ei05zi returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ei05zi resource create rabbitmq ocf:heartbeat:rabbitmq-cluster set_policy='ha-all ^(?!amq\\.).* {\"ha-mode\":\"exactly\",\"ha-params\":1}' meta notify=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ei05zi diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ei05zi.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]: The container Pacemaker::Resource::Ocf[rabbitmq] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[rabbitmq]: Unscheduling all events on Pacemaker::Resource::Ocf[rabbitmq]", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing check 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: Executing: 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: Error: Failed to initialize erlang distribution: {{shutdown,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {failed_to_start_child,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_kernel,", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {'EXIT',nodistribution}}},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {child,undefined,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_sup_dynamic,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {erl_distribution,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: start_link,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [['rabbitmq-cli-24',", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: shortnames]]},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: permanent,1000,supervisor,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [erl_distribution]}}.", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 1/180", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 3/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Info: 
Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 44021480", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 66.73 seconds", > "Changes:", > " Total: 21", > "Events:", > " Success: 21", > "Resources:", > " Changed: 18", > " Out of sync: 18", > " Skipped: 25", > " Total: 45", > "Time:", > " File line: 0.00", > " File: 0.05", > " Config retrieval: 1.77", > " Pcmk resource: 11.16", > " Last run: 1538484294", > " Pcmk bundle: 19.76", > " Exec: 25.88", > " Total: 68.23", > " Pcmk property: 9.61", > "Version:", > " Config: 1538484225", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding 
file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", 
:mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with 
File[/var/lib/puppet]", > "Debug: Finishing transaction 41917740", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=2", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "+ EXTRA_ARGS=--debug", > "+ '[' -d /tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 2}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "Warning: Facter: Could not retrieve fact='rabbitmq_nodename', resolution='<anonymous>': undefined method `[]' for nil:NilClass", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::mysql_bundle from tripleo/profile/pacemaker/database/mysql_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bind_address in JSON backend", > "Debug: hiera(): Looking up fqdn_internal_api in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ca_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::cipher_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gcomm_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gmcast_listen_addr in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_cipher in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::step in JSON backend", > "Debug: hiera(): Looking up mysql_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::certificate_specs in JSON backend", > "Debug: hiera(): Looking up mysql_bind_host in JSON backend", > "Debug: hiera(): Looking up innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up mysql_ipv6 in JSON backend", > "Debug: hiera(): Looking up mysql_short_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql from tripleo/profile/base/database/mysql into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::generate_dropin_file_limit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::step in JSON backend", > "Debug: hiera(): 
Looking up innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up enable_galera in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server.pp' in environment production", > "Debug: Automatically imported mysql::server from mysql/server into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::server::includedir in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_secret_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_config_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::purge_conf_dir in JSON backend", > "Debug: hiera(): Looking up mysql::server::restart in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::mysql_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_provider in JSON backend", > "Debug: hiera(): Looking up mysql::server::users in JSON backend", > "Debug: hiera(): Looking up mysql::server::grants in JSON backend", > "Debug: hiera(): Looking up mysql::server::databases in JSON backend", > "Debug: hiera(): Looking up mysql::server::enabled in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_service in JSON backend", > "Debug: hiera(): Looking up mysql::server::old_root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/db.pp' in environment 
production", > "Debug: Automatically imported mysql::db from mysql/db into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/config.pp' in environment production", > "Debug: Automatically imported mysql::server::config from mysql/server/config into production", > "Debug: Scope(Class[Mysql::Server::Config]): Retrieving template mysql/my.cnf.erb", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Bound template variables for /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Interpolated template /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/install.pp' in environment production", > "Debug: Automatically imported mysql::server::install from mysql/server/install into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/binarylog.pp' in environment production", > "Debug: Automatically imported mysql::server::binarylog from mysql/server/binarylog into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/installdb.pp' in environment production", > "Debug: Automatically imported mysql::server::installdb from mysql/server/installdb into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/service.pp' in environment production", > "Debug: Automatically imported mysql::server::service from mysql/server/service into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/root_password.pp' in environment production", > "Debug: Automatically imported mysql::server::root_password from mysql/server/root_password into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/providers.pp' in environment production", > "Debug: Automatically imported mysql::server::providers from mysql/server/providers into production", > "Debug: importing 
'/etc/puppet/modules/mysql/manifests/server/account_security.pp' in environment production", > "Debug: Automatically imported mysql::server::account_security from mysql/server/account_security into production", > "Debug: hiera(): Looking up aodh_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/aodh/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported aodh::db::mysql from aodh/db/mysql into production", > "Debug: hiera(): Looking up aodh::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/deps.pp' in environment production", > "Debug: Automatically imported aodh::deps from aodh/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql from openstacklib/db/mysql into production", > "Debug: hiera(): Looking up ceilometer_collector_enabled in JSON backend", > "Debug: 
hiera(): Looking up cinder_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported cinder::db::mysql from cinder/db/mysql into production", > "Debug: hiera(): Looking up cinder::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: hiera(): Looking up barbican_api_enabled in JSON backend", > "Debug: hiera(): Looking up congress_enabled in JSON backend", > "Debug: hiera(): Looking up designate_api_enabled in JSON backend", > "Debug: hiera(): Looking up glance_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/glance/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported glance::db::mysql from glance/db/mysql into production", > "Debug: hiera(): Looking up glance::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking 
up glance::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/deps.pp' in environment production", > "Debug: Automatically imported glance::deps from glance/deps into production", > "Debug: hiera(): Looking up gnocchi_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported gnocchi::db::mysql from gnocchi/db/mysql into production", > "Debug: hiera(): Looking up gnocchi::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/deps.pp' in environment production", > "Debug: Automatically imported gnocchi::deps from gnocchi/deps into production", > "Debug: hiera(): Looking up heat_engine_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/heat/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported heat::db::mysql from heat/db/mysql into production", > "Debug: hiera(): Looking up heat::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::host in JSON backend", > 
"Debug: hiera(): Looking up heat::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/deps.pp' in environment production", > "Debug: Automatically imported heat::deps from heat/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/cache.pp' in environment production", > "Debug: Automatically imported oslo::cache from oslo/cache into production", > "Debug: hiera(): Looking up ironic_api_enabled in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_enabled in JSON backend", > "Debug: hiera(): Looking up keystone_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/keystone/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported keystone::db::mysql from keystone/db/mysql into production", > "Debug: hiera(): Looking up keystone::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/deps.pp' in environment production", > "Debug: Automatically imported keystone::deps from keystone/deps into production", > "Debug: hiera(): Looking up manila_api_enabled in JSON backend", > "Debug: hiera(): Looking up mistral_api_enabled in JSON backend", > "Debug: hiera(): Looking up neutron_api_enabled in JSON backend", > "Debug: 
importing '/etc/puppet/modules/neutron/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/neutron/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported neutron::db::mysql from neutron/db/mysql into production", > "Debug: hiera(): Looking up neutron::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/deps.pp' in environment production", > "Debug: Automatically imported neutron::deps from neutron/deps into production", > "Debug: hiera(): Looking up nova_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported nova::db::mysql from nova/db/mysql into production", > "Debug: hiera(): Looking up nova::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::setup_cell0 in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/deps.pp' in environment 
production", > "Debug: Automatically imported nova::deps from nova/deps into production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_api.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_api from nova/db/mysql_api into production", > "Debug: hiera(): Looking up nova::db::mysql_api::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova_placement_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_placement.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_placement from nova/db/mysql_placement into production", > "Debug: hiera(): Looking up nova::db::mysql_placement::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up octavia_api_enabled in JSON backend", > "Debug: hiera(): Looking up sahara_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/sahara/manifests/db/mysql.pp' in 
environment production", > "Debug: Automatically imported sahara::db::mysql from sahara/db/mysql into production", > "Debug: hiera(): Looking up sahara::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/deps.pp' in environment production", > "Debug: Automatically imported sahara::deps from sahara/deps into production", > "Debug: hiera(): Looking up tacker_enabled in JSON backend", > "Debug: hiera(): Looking up trove_api_enabled in JSON backend", > "Debug: hiera(): Looking up panko_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/panko/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported panko::db::mysql from panko/db/mysql into production", > "Debug: hiera(): Looking up panko::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/deps.pp' in environment production", > "Debug: Automatically imported panko::deps from panko/deps into production", > "Debug: 
hiera(): Looking up ec2_api_enabled in JSON backend", > "Debug: hiera(): Looking up zaqar_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client.pp' in environment production", > "Debug: Automatically imported mysql::client from mysql/client into production", > "Debug: hiera(): Looking up mysql::client::bindings_enable in JSON backend", > "Debug: hiera(): Looking up mysql::client::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_name in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client/install.pp' in environment production", > "Debug: Automatically imported mysql::client::install from mysql/client/install into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql::host_access from openstacklib/db/mysql/host_access into production", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[galera] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-galera-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[galera-bundle] with 'before'", > "Debug: Adding relationship from Anchor[mysql::server::start] to Class[Mysql::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Install] to Class[Mysql::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Config] to Class[Mysql::Server::Binarylog] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Binarylog] to Class[Mysql::Server::Installdb] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Installdb] to 
Class[Mysql::Server::Service] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Service] to Class[Mysql::Server::Root_password] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Root_password] to Class[Mysql::Server::Providers] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Providers] to Anchor[mysql::server::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::db::begin] with 'before'", 
> "Debug: Adding relationship from Anchor[aodh::db::begin] to Anchor[aodh::db::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::end] to Anchor[aodh::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::dbsync::begin] to Anchor[aodh::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::dbsync::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Class[Aodh::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Aodh::Db::Mysql] to Anchor[aodh::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Class[Cinder::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Cinder::Db::Mysql] to Anchor[cinder::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to 
Anchor[glance::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Anchor[glance::db::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::end] to Anchor[glance::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::dbsync::begin] to Anchor[glance::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::dbsync::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Class[Glance::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Glance::Db::Mysql] to Anchor[glance::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Anchor[gnocchi::db::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::end] to Anchor[gnocchi::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::begin] to Anchor[gnocchi::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to 
Class[Gnocchi::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Gnocchi::Db::Mysql] to Anchor[gnocchi::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Anchor[heat::db::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::end] to Anchor[heat::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::dbsync::begin] to Anchor[heat::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::dbsync::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Class[Heat::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Heat::Db::Mysql] to Anchor[heat::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Anchor[keystone::db::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::end] to Anchor[keystone::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::dbsync::begin] to Anchor[keystone::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::dbsync::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::service::begin] with 'notify'", 
> "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Class[Keystone::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Keystone::Db::Mysql] to Anchor[keystone::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Anchor[neutron::db::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::end] to Anchor[neutron::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::dbsync::begin] to Anchor[neutron::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::dbsync::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Class[Neutron::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Neutron::Db::Mysql] to Anchor[neutron::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Anchor[nova::db::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding 
relationship from Anchor[nova::config::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::dbsync_api::begin] to Anchor[nova::dbsync_api::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::dbsync::begin] to Anchor[nova::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::cell_v2::begin] to Anchor[nova::cell_v2::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db_online_data_migrations::begin] to Anchor[nova::db_online_data_migrations::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_api] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_api] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_placement] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_placement] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Anchor[sahara::db::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::end] to Anchor[sahara::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::dbsync::begin] to Anchor[sahara::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::dbsync::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding 
relationship from Anchor[sahara::config::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Class[Sahara::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Sahara::Db::Mysql] to Anchor[sahara::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Anchor[panko::db::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::end] to Anchor[panko::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::dbsync::begin] to Anchor[panko::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::dbsync::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Class[Panko::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Panko::Db::Mysql] to Anchor[panko::db::end] with 'notify'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from 
File[/root/.my.cnf] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.20] with 'before'", > 
"Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.20/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.28/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_grant[cinder@172.17.1.20/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.28/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.20/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.28/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.20/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.28/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.20/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.28/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding 
relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.20/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.28/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.20/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.28/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.20/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.28/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.20/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.28/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.20/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.28/panko.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship 
from File[/etc/sysconfig/clustercheck] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.20] with 'before'", > "Debug: Adding 
relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.20] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.28] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.20/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.28/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.20/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.28/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.20/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to 
Mysql_grant[glance@172.17.1.28/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.20/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.28/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.20/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.28/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.20/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.28/nova.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.20/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.28/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.20/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.28/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.20/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.28/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.20/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.28/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[test] with 
'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: 
Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.20] with 
'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.20] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.28] with 'before'", > "Debug: Adding relationship from 
Exec[galera-ready] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.20/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.28/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.20/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.28/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.20/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.28/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.20/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.28/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.20/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.28/keystone.*] with 'before'", > "Debug: Adding relationship from 
Exec[galera-ready] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.20/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.28/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.20/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.28/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.20/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.28/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.20/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to 
Mysql_grant[sahara@172.17.1.28/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.20/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.28/panko.*] with 'before'", > "Debug: Adding relationship from Anchor[mysql::client::start] to Class[Mysql::Client::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Client::Install] to Anchor[mysql::client::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: 
Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@%] to Mysql_grant[aodh@%/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.20] to Mysql_grant[aodh@172.17.1.20/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.28] to Mysql_grant[aodh@172.17.1.28/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@%] to 
Mysql_grant[cinder@%/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.20] to Mysql_grant[cinder@172.17.1.20/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.28] to Mysql_grant[cinder@172.17.1.28/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@%] to Mysql_grant[glance@%/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.20] to Mysql_grant[glance@172.17.1.20/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.28] to Mysql_grant[glance@172.17.1.28/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@%] to Mysql_grant[gnocchi@%/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.20] to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.28] to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@%] with 'notify'", > "Debug: Adding relationship 
from Mysql_user[heat@%] to Mysql_grant[heat@%/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.20] to Mysql_grant[heat@172.17.1.20/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.28] to Mysql_grant[heat@172.17.1.28/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@%] to Mysql_grant[keystone@%/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.20] to Mysql_grant[keystone@172.17.1.20/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.28] to Mysql_grant[keystone@172.17.1.28/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@%] to Mysql_grant[neutron@%/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.20] to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.28] to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to 
Mysql_user[nova@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.20] to Mysql_grant[nova@172.17.1.20/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.28] to Mysql_grant[nova@172.17.1.28/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.20] to Mysql_grant[nova@172.17.1.20/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.28] to Mysql_grant[nova@172.17.1.28/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@%] to Mysql_grant[nova_api@%/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.20] to Mysql_grant[nova_api@172.17.1.20/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.28] to Mysql_grant[nova_api@172.17.1.28/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@%] to Mysql_grant[nova_placement@%/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.20] with 
'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.20] to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.28] to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@%] to Mysql_grant[sahara@%/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.20] to Mysql_grant[sahara@172.17.1.20/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.28] to Mysql_grant[sahara@172.17.1.28/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@%] to Mysql_grant[panko@%/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.20] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.20] to Mysql_grant[panko@172.17.1.20/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.28] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.28] to Mysql_grant[panko@172.17.1.28/panko.*] with 'notify'", > "Debug: File[mysql-config-file]: Adding default for owner", > "Debug: File[mysql-config-file]: Adding default for group", > "Debug: File[/etc/my.cnf.d]: Adding default for owner", > "Debug: File[/etc/my.cnf.d]: Adding default for 
group", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.62 seconds", > "Info: Applying configuration version '1538484300'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[galera]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-galera-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: 
subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.20]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.20/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.28/aodh.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.20/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.28/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.20/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.28/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.20/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.28/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to 
Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.20/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.28/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.20/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.20/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.20/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.28/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.20/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.28/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.20/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.28/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[test]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: 
subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.20]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.20]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.20]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.20/aodh.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.28/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.20/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.28/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.20/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.28/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to 
Mysql_grant[heat@172.17.1.20/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.28/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.20/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.28/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.20/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.20/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.20/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.28/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.20/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.28/sahara.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.20/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.28/panko.*]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to 
Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server::Config/before: subscribes to Class[Mysql::Server::Binarylog]", > "Debug: /Stage[main]/Mysql::Server::Install/before: subscribes to Class[Mysql::Server::Config]", > "Debug: /Stage[main]/Mysql::Server::Binarylog/before: subscribes to Class[Mysql::Server::Installdb]", > "Debug: /Stage[main]/Mysql::Server::Installdb/before: subscribes to Class[Mysql::Server::Service]", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/require: subscribes to Mysql_datadir[/var/lib/mysql]", > "Debug: /Stage[main]/Mysql::Server::Service/before: subscribes to Class[Mysql::Server::Root_password]", > "Debug: /Stage[main]/Mysql::Server::Root_password/before: subscribes to Class[Mysql::Server::Providers]", > "Debug: /Stage[main]/Mysql::Server::Providers/before: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/require: subscribes to 
Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]/before: subscribes to Class[Mysql::Server::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/notify: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/before: subscribes to Anchor[aodh::config::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/before: subscribes to Anchor[aodh::db::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/before: subscribes to 
Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/notify: subscribes to Class[Aodh::Db::Mysql]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]/notify: subscribes to Anchor[aodh::dbsync::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]/before: subscribes to Anchor[aodh::dbsync::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Cinder::Db::Mysql/notify: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/notify: subscribes to Class[Cinder::Db::Mysql]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: subscribes to Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Glance::Db::Mysql/notify: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/before: subscribes to Anchor[glance::config::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/before: 
subscribes to Anchor[glance::db::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/before: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/notify: subscribes to Class[Glance::Db::Mysql]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]/notify: subscribes to Anchor[glance::dbsync::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]/before: subscribes to Anchor[glance::dbsync::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/notify: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/before: subscribes to Anchor[gnocchi::config::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/before: subscribes to Anchor[gnocchi::db::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/before: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/notify: subscribes to Class[Gnocchi::Db::Mysql]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]/notify: subscribes to Anchor[gnocchi::dbsync::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]/before: subscribes to Anchor[gnocchi::dbsync::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Heat::Db::Mysql/notify: subscribes to Anchor[heat::db::end]", > "Debug: 
/Stage[main]/Heat::Deps/Anchor[heat::install::end]/before: subscribes to Anchor[heat::config::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/before: subscribes to Anchor[heat::db::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/before: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/notify: subscribes to Class[Heat::Db::Mysql]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]/notify: subscribes to Anchor[heat::dbsync::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]/before: subscribes to Anchor[heat::dbsync::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Keystone::Db::Mysql/notify: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/before: subscribes to Anchor[keystone::config::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/before: subscribes to Anchor[keystone::db::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/before: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/notify: subscribes to Class[Keystone::Db::Mysql]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]/notify: subscribes to Anchor[keystone::dbsync::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]/before: 
subscribes to Anchor[keystone::dbsync::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Neutron::Db::Mysql/notify: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/before: subscribes to Anchor[neutron::config::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/before: subscribes to Anchor[neutron::db::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/before: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/notify: subscribes to Class[Neutron::Db::Mysql]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]/notify: subscribes to Anchor[neutron::dbsync::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]/before: subscribes to Anchor[neutron::dbsync::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/before: subscribes to Anchor[nova::config::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/before: subscribes to Anchor[nova::db::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/before: subscribes to Anchor[nova::db::end]", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_api]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/before: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/before: subscribes to Anchor[nova::dbsync::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/notify: subscribes to Anchor[nova::cell_v2::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]/notify: subscribes to Anchor[nova::dbsync::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/before: subscribes to Anchor[nova::db_online_data_migrations::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]/notify: subscribes to 
Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Sahara::Db::Mysql/notify: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/before: subscribes to Anchor[sahara::config::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/before: subscribes to Anchor[sahara::db::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/before: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/notify: subscribes to Class[Sahara::Db::Mysql]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]/notify: subscribes to Anchor[sahara::dbsync::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]/before: subscribes to Anchor[sahara::dbsync::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Panko::Db::Mysql/notify: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/before: subscribes to Anchor[panko::config::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/before: subscribes to Anchor[panko::db::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/before: subscribes to 
Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/notify: subscribes to Class[Panko::Db::Mysql]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]/notify: subscribes to Anchor[panko::dbsync::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]/before: subscribes to Anchor[panko::dbsync::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Class[Mysql::Server]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/before: subscribes to Exec[galera-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[heat]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.28]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_user[nova@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.28]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.20]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.28]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.20/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.28/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.20/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.28/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.20/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.28/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@%/heat.*]", > 
"Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.20/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.28/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.20/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.28/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.20/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_grant[nova@172.17.1.20/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.28/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.20/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.28/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.20/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.28/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.20/panko.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.28/panko.*]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Client::Install/before: subscribes to Anchor[mysql::client::end]", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]/before: subscribes to Class[Mysql::Client::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.20]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.28]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: 
subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.20]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.28]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.20]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.28]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.20]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.28]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.20]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.28]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.20]", > "Debug: 
/Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.28]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.20]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.28]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.20]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.28]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.20]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.28]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.20]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to 
Mysql_user[nova_placement@172.17.1.28]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.20]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.28]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.20]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.28]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/notify: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]/Mysql_user[aodh@172.17.1.20]/notify: subscribes to Mysql_grant[aodh@172.17.1.20/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]/Mysql_user[aodh@172.17.1.28]/notify: subscribes to Mysql_grant[aodh@172.17.1.28/aodh.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/notify: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]/Mysql_user[cinder@172.17.1.20]/notify: subscribes to Mysql_grant[cinder@172.17.1.20/cinder.*]", > "Debug: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]/Mysql_user[cinder@172.17.1.28]/notify: subscribes to Mysql_grant[cinder@172.17.1.28/cinder.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/notify: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]/Mysql_user[glance@172.17.1.20]/notify: subscribes to Mysql_grant[glance@172.17.1.20/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]/Mysql_user[glance@172.17.1.28]/notify: subscribes to Mysql_grant[glance@172.17.1.28/glance.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/notify: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]/Mysql_user[gnocchi@172.17.1.20]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]/Mysql_user[gnocchi@172.17.1.28]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/notify: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]/Mysql_user[heat@172.17.1.20]/notify: subscribes to Mysql_grant[heat@172.17.1.20/heat.*]", > "Debug: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]/Mysql_user[heat@172.17.1.28]/notify: subscribes to Mysql_grant[heat@172.17.1.28/heat.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/notify: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]/Mysql_user[keystone@172.17.1.20]/notify: subscribes to Mysql_grant[keystone@172.17.1.20/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]/Mysql_user[keystone@172.17.1.28]/notify: subscribes to Mysql_grant[keystone@172.17.1.28/keystone.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/notify: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]/Mysql_user[neutron@172.17.1.20]/notify: subscribes to Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]/Mysql_user[neutron@172.17.1.28]/notify: subscribes to Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_user[nova@172.17.1.20]/notify: subscribes to Mysql_grant[nova@172.17.1.20/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_user[nova@172.17.1.20]/notify: subscribes to Mysql_grant[nova@172.17.1.20/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_user[nova@172.17.1.28]/notify: subscribes to Mysql_grant[nova@172.17.1.28/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_user[nova@172.17.1.28]/notify: subscribes to Mysql_grant[nova@172.17.1.28/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/notify: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]/Mysql_user[nova_api@172.17.1.20]/notify: subscribes to Mysql_grant[nova_api@172.17.1.20/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]/Mysql_user[nova_api@172.17.1.28]/notify: subscribes to Mysql_grant[nova_api@172.17.1.28/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/notify: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]/Mysql_user[nova_placement@172.17.1.20]/notify: subscribes 
to Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]/Mysql_user[nova_placement@172.17.1.28]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/notify: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]/Mysql_user[sahara@172.17.1.20]/notify: subscribes to Mysql_grant[sahara@172.17.1.20/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]/Mysql_user[sahara@172.17.1.28]/notify: subscribes to Mysql_grant[sahara@172.17.1.28/sahara.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/notify: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]/Mysql_user[panko@172.17.1.20]/notify: subscribes to Mysql_grant[panko@172.17.1.20/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]/Mysql_user[panko@172.17.1.28]/notify: subscribes to Mysql_grant[panko@172.17.1.28/panko.*]", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Adding autorequire relationship with File[/etc/my.cnf.d]", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Adding autorequire relationship with Package[mysql-server]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}a730a65a0efef3097d49f2084ff2db3e'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}76a4e05ad880b930b43fc47f1d505711'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Config]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/my.cnf.d/galera.cnf", > "Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf.d/galera.cnf to puppet with sum af90358207ccfecae7af249d5ef7dd3e", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}be8dfdd5a4076d5f39de0ce6aecd87bf'", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event", > "Info: Class[Mysql::Server::Config]: Unscheduling all events on Class[Mysql::Server::Config]", > "Debug: Class[Mysql::Server::Binarylog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Binarylog]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Installdb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Installdb]: Resource is 
being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]: The container Class[Mysql::Server::Installdb] will propagate my refresh event", > "Info: Class[Mysql::Server::Installdb]: Unscheduling all events on Class[Mysql::Server::Installdb]", > "Debug: Class[Mysql::Server::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Root_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Root_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Providers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Providers]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Account_security]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Account_security]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, 
mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[aodh]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, 
concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Glance::Deps/Anchor[glance::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[glance]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, 
galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[heat]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[heat]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Deps]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Resource 
is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[keystone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Deps]: Resource 
is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[neutron]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql]: Not tagged with file, file_line, concat, 
augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_api]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_placement]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[sahara]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Resource is being skipped, 
unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Panko::Deps/Anchor[panko::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[panko]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[panko]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[galera-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[galera-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-11rkk95 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-11rkk95 property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: Class[Mysql::Client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Client::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Resource is being skipped, unscheduling all events", > "Debug: 
Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]: Resource is being skipped, unscheduling all events", 
> "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]: Resource is being skipped, 
unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ap5tv5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ap5tv5 property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qq16l8 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qq16l8 property set --node controller-0 galera-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qq16l8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qq16l8.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 galera-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]: The container Pacemaker::Property[galera-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[galera-role-controller-0]: Unscheduling all events on 
Pacemaker::Property[galera-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bvlujs returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bvlujs constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bm3rxe returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bm3rxe resource show galera-bundle > /dev/null 2>&1", > "Debug: Exists: bundle galera-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oi64l8 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oi64l8 resource bundle create galera-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=mysql-cfg-files source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=mysql-cfg-data source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=mysql-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=mysql-localtime source-dir=/etc/localtime target-dir=/etc/localtime 
options=ro storage-map id=mysql-lib source-dir=/var/lib/mysql target-dir=/var/lib/mysql options=rw storage-map id=mysql-log-mariadb source-dir=/var/log/mariadb target-dir=/var/log/mariadb options=rw storage-map id=mysql-log source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw storage-map id=mysql-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3123 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oi64l8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oi64l8.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: location_rule_create: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17z4fdy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17z4fdy constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17z4fdy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17z4fdy.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-155sccv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-155sccv resource enable galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-155sccv diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-155sccv.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]: The container Pacemaker::Resource::Bundle[galera-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[galera-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: Pacemaker::Resource::Ocf[galera]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Ocf[galera]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1hns76j returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1hns76j constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uydjm8 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uydjm8 resource show galera > /dev/null 2>&1", > "Debug: Exists: resource galera exists 1 location exists 0 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-2ekcw returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-2ekcw resource create galera ocf:heartbeat:galera log='/var/log/mysql/mysqld.log' additional_parameters='--open-files-limit=16384' enable_creation=true wsrep_cluster_address='gcomm://controller-0.internalapi.localdomain' 
cluster_host_map='controller-0:controller-0.internalapi.localdomain' meta master-max=1 ordered=true container-attribute-target=host op promote timeout=300s on-fail=block bundle galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-2ekcw diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-2ekcw.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]: The container Pacemaker::Resource::Ocf[galera] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[galera]: Unscheduling all events on Pacemaker::Resource::Ocf[galera]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 1/180", > "Debug: Exec[galera-ready](provider=posix): Executing '/usr/bin/clustercheck >/dev/null'", > "Debug: Executing: '/usr/bin/clustercheck >/dev/null'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 3/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 4/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Info: 
Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]", > "Debug: Prefetching mysql resources for mysql_user", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@%''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@127.0.0.1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@::1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@controller-0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'clustercheck@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, 
SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'127.0.0.1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'::1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'controller-0''", > "Notice: 
/Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Prefetching mysql resources for mysql_database", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' information_schema'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' mysql'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' performance_schema'", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]: Nothing to manage: no ensure and the resource doesn't exist", > "Info: Class[Mysql::Server::Account_security]: Unscheduling all events on Class[Mysql::Server::Account_security]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]: The container Openstacklib::Db::Mysql[aodh] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `cinder` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/ensure: created", > "Debug: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]: The container Openstacklib::Db::Mysql[cinder] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `glance` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]: The container Openstacklib::Db::Mysql[glance] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `gnocchi` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]: The container Openstacklib::Db::Mysql[gnocchi] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `heat` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]: The container Openstacklib::Db::Mysql[heat] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `keystone` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]: The container Openstacklib::Db::Mysql[keystone] will propagate my 
refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `ovs_neutron` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]: The container Openstacklib::Db::Mysql[neutron] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]: The container Openstacklib::Db::Mysql[nova] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_cell0` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]: The container Openstacklib::Db::Mysql[nova_cell0] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_api` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]: The container Openstacklib::Db::Mysql[nova_api] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists 
`nova_placement` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]: The container Openstacklib::Db::Mysql[nova_placement] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `sahara` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]: The container Openstacklib::Db::Mysql[sahara] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `panko` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]: The container Openstacklib::Db::Mysql[panko] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'%' IDENTIFIED BY PASSWORD '*54DB8F030FE3DFD0E3C81629273E1979C2811757''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' REQUIRE NONE'", > "Notice: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Debug: Prefetching mysql resources for mysql_grant", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'aodh'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'clustercheck'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'%''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]/ensure: created", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe FLUSH PRIVILEGES'", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.20' IDENTIFIED BY PASSWORD '*54DB8F030FE3DFD0E3C81629273E1979C2811757''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.20' WITH 
MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]/Mysql_user[aodh@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]/Mysql_user[aodh@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.20''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]/Mysql_grant[aodh@172.17.1.20/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]/Mysql_grant[aodh@172.17.1.20/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.28' IDENTIFIED BY PASSWORD '*54DB8F030FE3DFD0E3C81629273E1979C2811757''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.28' 
REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]/Mysql_user[aodh@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]/Mysql_user[aodh@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.28''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]/Mysql_grant[aodh@172.17.1.28/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]/Mysql_grant[aodh@172.17.1.28/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[aodh]: Unscheduling all events on Openstacklib::Db::Mysql[aodh]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, 
mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'%' IDENTIFIED BY PASSWORD '*38F217A796F373E82C261D1D7FACE80021E2BA65''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'%''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.20' IDENTIFIED BY PASSWORD '*38F217A796F373E82C261D1D7FACE80021E2BA65''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]/Mysql_user[cinder@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]/Mysql_user[cinder@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.20''", > "Notice: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]/Mysql_grant[cinder@172.17.1.20/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]/Mysql_grant[cinder@172.17.1.20/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.28' IDENTIFIED BY PASSWORD '*38F217A796F373E82C261D1D7FACE80021E2BA65''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]/Mysql_user[cinder@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]/Mysql_user[cinder@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.28''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]/Mysql_grant[cinder@172.17.1.28/cinder.*]/ensure: created", > 
"Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]/Mysql_grant[cinder@172.17.1.28/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[cinder]: Unscheduling all events on Openstacklib::Db::Mysql[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'%' IDENTIFIED BY PASSWORD '*21D639F86D5D1083B5BA305221981AB6C1B743C9''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'%''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf 
--database=mysql -e CREATE USER 'glance'@'172.17.1.20' IDENTIFIED BY PASSWORD '*21D639F86D5D1083B5BA305221981AB6C1B743C9''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]/Mysql_user[glance@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]/Mysql_user[glance@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.20''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]/Mysql_grant[glance@172.17.1.20/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]/Mysql_grant[glance@172.17.1.20/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.28' IDENTIFIED BY PASSWORD '*21D639F86D5D1083B5BA305221981AB6C1B743C9''", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]/Mysql_user[glance@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]/Mysql_user[glance@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.28''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]/Mysql_grant[glance@172.17.1.28/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]/Mysql_grant[glance@172.17.1.28/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[glance]: Unscheduling all events on Openstacklib::Db::Mysql[glance]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > 
"Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'%' IDENTIFIED BY PASSWORD '*C2F1FD3A02B9831BDEEF56E29F25745A5F73D3CF''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' REQUIRE NONE'", > "Notice: 
/Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'%''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.20' IDENTIFIED BY PASSWORD '*C2F1FD3A02B9831BDEEF56E29F25745A5F73D3CF''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]/Mysql_user[gnocchi@172.17.1.20]/ensure: created", > "Debug: 
/Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]/Mysql_user[gnocchi@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.20''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]/Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]/Mysql_grant[gnocchi@172.17.1.20/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.28' IDENTIFIED BY PASSWORD '*C2F1FD3A02B9831BDEEF56E29F25745A5F73D3CF''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]/Mysql_user[gnocchi@172.17.1.28]/ensure: created", > "Debug: 
/Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]/Mysql_user[gnocchi@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.28''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]/Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]/Mysql_grant[gnocchi@172.17.1.28/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[gnocchi]: Unscheduling all events on Openstacklib::Db::Mysql[gnocchi]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Resource is being skipped, 
unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'%' IDENTIFIED BY PASSWORD '*94D6E2486F96E971EBA48C2850A59A1DA419E8A6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'%''", > "Notice: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.20' IDENTIFIED BY PASSWORD '*94D6E2486F96E971EBA48C2850A59A1DA419E8A6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]/Mysql_user[heat@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]/Mysql_user[heat@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.20''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]/Mysql_grant[heat@172.17.1.20/heat.*]/ensure: created", > "Debug: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]/Mysql_grant[heat@172.17.1.20/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.28' IDENTIFIED BY PASSWORD '*94D6E2486F96E971EBA48C2850A59A1DA419E8A6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]/Mysql_user[heat@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]/Mysql_user[heat@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.28''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]/Mysql_grant[heat@172.17.1.28/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]/Mysql_grant[heat@172.17.1.28/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28] will propagate 
my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[heat]: Unscheduling all events on Openstacklib::Db::Mysql[heat]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'%' IDENTIFIED BY PASSWORD '*1E0B801834FA074128BB16AA6126AAE97D9FC5D1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'%''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.20' IDENTIFIED BY PASSWORD '*1E0B801834FA074128BB16AA6126AAE97D9FC5D1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'keystone'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]/Mysql_user[keystone@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]/Mysql_user[keystone@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.20''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]/Mysql_grant[keystone@172.17.1.20/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]/Mysql_grant[keystone@172.17.1.20/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.28' IDENTIFIED BY PASSWORD '*1E0B801834FA074128BB16AA6126AAE97D9FC5D1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 
0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]/Mysql_user[keystone@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]/Mysql_user[keystone@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.28''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]/Mysql_grant[keystone@172.17.1.28/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]/Mysql_grant[keystone@172.17.1.28/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[keystone]: Unscheduling all events on Openstacklib::Db::Mysql[keystone]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'%' IDENTIFIED BY PASSWORD '*AC29B82D21D7D0C33689D818B7B889E50943F2C1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' REQUIRE NONE'", > "Notice: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'%''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.20' IDENTIFIED BY PASSWORD '*AC29B82D21D7D0C33689D818B7B889E50943F2C1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]/Mysql_user[neutron@172.17.1.20]/ensure: created", > "Debug: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]/Mysql_user[neutron@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.20''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]/Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]/Mysql_grant[neutron@172.17.1.20/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.28' IDENTIFIED BY PASSWORD '*AC29B82D21D7D0C33689D818B7B889E50943F2C1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]/Mysql_user[neutron@172.17.1.28]/ensure: created", > "Debug: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]/Mysql_user[neutron@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.28''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]/Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]/Mysql_grant[neutron@172.17.1.28/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[neutron]: Unscheduling all events on Openstacklib::Db::Mysql[neutron]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'%' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf 
--database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.20' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_user[nova@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_user[nova@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.20''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_grant[nova@172.17.1.20/nova.*]/ensure: created", > "Debug: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]/Mysql_grant[nova@172.17.1.20/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.28' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_user[nova@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_user[nova@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.28''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_grant[nova@172.17.1.28/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]/Mysql_grant[nova@172.17.1.28/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28] will propagate 
my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[nova]: Unscheduling all events on Openstacklib::Db::Mysql[nova]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.20''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]/Mysql_grant[nova@172.17.1.20/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]/Mysql_grant[nova@172.17.1.20/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.28''", > "Notice: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]/Mysql_grant[nova@172.17.1.28/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]/Mysql_grant[nova@172.17.1.28/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[nova_cell0]: Unscheduling all events on Openstacklib::Db::Mysql[nova_cell0]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'%' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'%''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.20' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]/Mysql_user[nova_api@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]/Mysql_user[nova_api@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.20''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]/Mysql_grant[nova_api@172.17.1.20/nova_api.*]/ensure: created", > 
"Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]/Mysql_grant[nova_api@172.17.1.20/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.28' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]/Mysql_user[nova_api@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]/Mysql_user[nova_api@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.28''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]/Mysql_grant[nova_api@172.17.1.28/nova_api.*]/ensure: created", > "Debug: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]/Mysql_grant[nova_api@172.17.1.28/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[nova_api]: Unscheduling all events on Openstacklib::Db::Mysql[nova_api]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'%' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]/ensure: created", > "Debug: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.20' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]/Mysql_user[nova_placement@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]/Mysql_user[nova_placement@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.20''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]/Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]/ensure: created", > "Debug: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]/Mysql_grant[nova_placement@172.17.1.20/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.28' IDENTIFIED BY PASSWORD '*8C5F88DCAB919BCCABA8039F60386126AC035B88''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]/Mysql_user[nova_placement@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]/Mysql_user[nova_placement@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.28''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]/Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]/Mysql_grant[nova_placement@172.17.1.28/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[nova_placement]: Unscheduling all events on Openstacklib::Db::Mysql[nova_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'%' IDENTIFIED BY PASSWORD '*8F18E1B0D45064D158D3609BADFE228B01CD30A0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/ensure: created", > "Debug: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'%''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.20' IDENTIFIED BY PASSWORD '*8F18E1B0D45064D158D3609BADFE228B01CD30A0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]/Mysql_user[sahara@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]/Mysql_user[sahara@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20] will propagate my refresh event", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.20''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]/Mysql_grant[sahara@172.17.1.20/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]/Mysql_grant[sahara@172.17.1.20/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.28' IDENTIFIED BY PASSWORD '*8F18E1B0D45064D158D3609BADFE228B01CD30A0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]/Mysql_user[sahara@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]/Mysql_user[sahara@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.28''", > "Notice: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]/Mysql_grant[sahara@172.17.1.28/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]/Mysql_grant[sahara@172.17.1.28/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[sahara]: Unscheduling all events on Openstacklib::Db::Mysql[sahara]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'%' IDENTIFIED BY PASSWORD '*9D4193144B96D20FBF45F121DF32C5069293A24E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'%''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_%]: 
Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.20' IDENTIFIED BY PASSWORD '*9D4193144B96D20FBF45F121DF32C5069293A24E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.20' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.20' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]/Mysql_user[panko@172.17.1.20]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]/Mysql_user[panko@172.17.1.20]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.20''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]/Mysql_grant[panko@172.17.1.20/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]/Mysql_grant[panko@172.17.1.20/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.20]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.28' 
IDENTIFIED BY PASSWORD '*9D4193144B96D20FBF45F121DF32C5069293A24E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.28' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.28' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]/Mysql_user[panko@172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]/Mysql_user[panko@172.17.1.28]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.28''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]/Mysql_grant[panko@172.17.1.28/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]/Mysql_grant[panko@172.17.1.28/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.28]", > "Info: Openstacklib::Db::Mysql[panko]: Unscheduling all events on Openstacklib::Db::Mysql[panko]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Finishing transaction 44261080", > "Debug: Stored state in 0.01 seconds", > "Notice: Applied catalog in 76.71 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Skipped: 136", > " Total: 250", > " File: 0.04", > " Mysql database: 0.17", > " Mysql grant: 1.17", > " Mysql user: 1.42", > " Pcmk resource: 11.31", > " Last run: 1538484382", > " Pcmk bundle: 19.44", > " Exec: 32.27", > " Config retrieval: 4.97", > " Total: 80.40", > " Pcmk property: 9.62", > " Config: 1538484300", > "Debug: Finishing transaction 53772400", > "+ 
TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"unknown\", 1]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "stdout: Info: Loading facts", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.31 seconds", > "Info: Applying configuration version '1538484390'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]/File[/var/lib/neutron/l3_haproxy_wrapper]/ensure: defined content as '{md5}1f78cb7b3179c349cbf061c9b87a9eb2'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]/File[/var/lib/neutron/keepalived_wrapper]/ensure: defined content as '{md5}849083fe22c6b4b1f6c7366e81977810'", > "Info: 
Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]/File[/var/lib/neutron/keepalived_state_change_wrapper]/ensure: defined content as '{md5}f72bfec5dc1c16b968223450454f78bf'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]/File[/var/lib/neutron/dibbler_wrapper]/ensure: defined content as '{md5}f8b78037c76463ae88136f6cfb9f0ade'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]", > "Notice: Applied catalog in 0.02 seconds", > " Total: 4", > " Success: 4", > " Total: 11", > " Out of sync: 4", > " Changed: 4", > " Skipped: 7", > " File: 0.01", > " Config retrieval: 0.40", > " Total: 0.41", > " Last run: 1538484390", > " Config: 1538484390", > "stderr: + STEP=4", > "+ TAGS=file", > "+ CONFIG='include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "+ EXTRA_ARGS=", > "+ echo '{\"step\": 4}'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing 
run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.29 seconds", > "Info: Applying configuration version '1538484396'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]/File[/var/lib/neutron/dnsmasq_wrapper]/ensure: defined content as '{md5}6e5f95e9643847f816e00fdc019cdf2a'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]/File[/var/lib/neutron/dhcp_haproxy_wrapper]/ensure: defined content as '{md5}b757e60504e49cfe159829c7e73c0b63'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]", > "Notice: Applied catalog in 0.01 seconds", > " Total: 2", > " Success: 2", > " Changed: 2", > " Out of sync: 2", > " Total: 9", > " File: 0.00", > " Config retrieval: 0.38", > " Total: 0.38", > " Last run: 1538484397", > " Config: 1538484396", > "+ CONFIG='include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "stderr: Error: 
unable to find resource 'redis-bundle'", > "stdout: c8a37e344d13316bc4e59e6f9182ef1ae40be1b73dabce188372f91a8fc25d48", > "stdout: fcadf803afc3050293db7eb977f86a274dec9cb6af35323ab1e3176fce8f4d07", > "stdout: f9935b9a3607962187b2aef5ff888206c094f39869b2fb0d711fe89f575bbe6d", > "stderr: Error: unable to find resource 'haproxy-bundle'", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/redis_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::redis_bundle from tripleo/profile/pacemaker/database/redis_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::extra_config_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_local_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_base_port in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port in JSON backend", > "Debug: hiera(): Looking up redis_certificate_specs in JSON backend", > "Debug: hiera(): Looking up redis_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up redis_network in JSON backend", > "Debug: hiera(): Looking up redis_file_limit in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/init.pp' in environment production", > "Debug: Automatically imported redis from redis into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/params.pp' in environment production", > "Debug: Automatically imported redis::params from redis/params into production", > "Debug: hiera(): Looking up redis::activerehashing in JSON backend", > "Debug: hiera(): Looking up redis::aof_load_truncated in JSON backend", > "Debug: hiera(): Looking up redis::aof_rewrite_incremental_fsync in JSON backend", > "Debug: hiera(): Looking up redis::appendfilename in JSON backend", > "Debug: hiera(): Looking up redis::appendfsync in JSON backend", > "Debug: hiera(): Looking up redis::appendonly in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_min_size in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_percentage in JSON backend", > "Debug: hiera(): Looking up redis::bind in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_slave in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_pubsub in JSON backend", > "Debug: hiera(): Looking up redis::conf_template in JSON backend", > "Debug: hiera(): Looking up redis::config_dir in JSON 
backend", > "Debug: hiera(): Looking up redis::config_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file in JSON backend", > "Debug: hiera(): Looking up redis::config_file_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file_orig in JSON backend", > "Debug: hiera(): Looking up redis::config_group in JSON backend", > "Debug: hiera(): Looking up redis::config_owner in JSON backend", > "Debug: hiera(): Looking up redis::daemonize in JSON backend", > "Debug: hiera(): Looking up redis::databases in JSON backend", > "Debug: hiera(): Looking up redis::default_install in JSON backend", > "Debug: hiera(): Looking up redis::dbfilename in JSON backend", > "Debug: hiera(): Looking up redis::extra_config_file in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::hll_sparse_max_bytes in JSON backend", > "Debug: hiera(): Looking up redis::hz in JSON backend", > "Debug: hiera(): Looking up redis::latency_monitor_threshold in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::log_dir in JSON backend", > "Debug: hiera(): Looking up redis::log_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::log_file in JSON backend", > "Debug: hiera(): Looking up redis::log_level in JSON backend", > "Debug: hiera(): Looking up redis::manage_package in JSON backend", > "Debug: hiera(): Looking up redis::manage_repo in JSON backend", > "Debug: hiera(): Looking up redis::masterauth in JSON backend", > "Debug: hiera(): Looking up redis::maxclients in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_policy in JSON backend", > "Debug: hiera(): Looking up 
redis::maxmemory_samples in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_max_lag in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_to_write in JSON backend", > "Debug: hiera(): Looking up redis::no_appendfsync_on_rewrite in JSON backend", > "Debug: hiera(): Looking up redis::notify_keyspace_events in JSON backend", > "Debug: hiera(): Looking up redis::notify_service in JSON backend", > "Debug: hiera(): Looking up redis::managed_by_cluster_manager in JSON backend", > "Debug: hiera(): Looking up redis::package_ensure in JSON backend", > "Debug: hiera(): Looking up redis::package_name in JSON backend", > "Debug: hiera(): Looking up redis::pid_file in JSON backend", > "Debug: hiera(): Looking up redis::port in JSON backend", > "Debug: hiera(): Looking up redis::protected_mode in JSON backend", > "Debug: hiera(): Looking up redis::ppa_repo in JSON backend", > "Debug: hiera(): Looking up redis::rdbcompression in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_size in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_ttl in JSON backend", > "Debug: hiera(): Looking up redis::repl_disable_tcp_nodelay in JSON backend", > "Debug: hiera(): Looking up redis::repl_ping_slave_period in JSON backend", > "Debug: hiera(): Looking up redis::repl_timeout in JSON backend", > "Debug: hiera(): Looking up redis::requirepass in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk_interval in JSON backend", > "Debug: hiera(): Looking up redis::service_enable in JSON backend", > "Debug: hiera(): Looking up redis::service_ensure in JSON backend", > "Debug: hiera(): Looking up redis::service_group in JSON backend", > "Debug: hiera(): Looking up redis::service_hasrestart in JSON backend", > "Debug: hiera(): Looking up redis::service_hasstatus in JSON backend", > "Debug: hiera(): Looking up redis::service_manage in JSON backend", > "Debug: 
hiera(): Looking up redis::service_name in JSON backend", > "Debug: hiera(): Looking up redis::service_provider in JSON backend", > "Debug: hiera(): Looking up redis::service_user in JSON backend", > "Debug: hiera(): Looking up redis::set_max_intset_entries in JSON backend", > "Debug: hiera(): Looking up redis::slave_priority in JSON backend", > "Debug: hiera(): Looking up redis::slave_read_only in JSON backend", > "Debug: hiera(): Looking up redis::slave_serve_stale_data in JSON backend", > "Debug: hiera(): Looking up redis::slaveof in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_log_slower_than in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_max_len in JSON backend", > "Debug: hiera(): Looking up redis::stop_writes_on_bgsave_error in JSON backend", > "Debug: hiera(): Looking up redis::syslog_enabled in JSON backend", > "Debug: hiera(): Looking up redis::syslog_facility in JSON backend", > "Debug: hiera(): Looking up redis::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up redis::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up redis::timeout in JSON backend", > "Debug: hiera(): Looking up redis::unixsocket in JSON backend", > "Debug: hiera(): Looking up redis::unixsocketperm in JSON backend", > "Debug: hiera(): Looking up redis::ulimit in JSON backend", > "Debug: hiera(): Looking up redis::workdir in JSON backend", > "Debug: hiera(): Looking up redis::workdir_mode in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::cluster_enabled in JSON backend", > "Debug: hiera(): Looking up redis::cluster_config_file in JSON backend", > "Debug: hiera(): Looking up redis::cluster_node_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/preinstall.pp' in environment production", > "Debug: Automatically imported redis::preinstall from 
redis/preinstall into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/install.pp' in environment production", > "Debug: Automatically imported redis::install from redis/install into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/config.pp' in environment production", > "Debug: Automatically imported redis::config from redis/config into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/instance.pp' in environment production", > "Debug: Automatically imported redis::instance from redis/instance into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/ulimit.pp' in environment production", > "Debug: Automatically imported redis::ulimit from redis/ulimit into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/service.pp' in environment production", > "Debug: Automatically imported redis::service from redis/service into production", > "Debug: hiera(): Looking up redis_short_node_names in JSON backend", > "Debug: Scope(Redis::Instance[default]): Retrieving template redis/redis.conf.3.2.erb", > "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Bound template variables for /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Interpolated template /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[redis] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-redis-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[redis-bundle] with 'before'", > "Debug: Adding relationship from Class[Redis::Preinstall] to Class[Redis::Install] with 'before'", > "Debug: Adding relationship from Class[Redis::Install] to Class[Redis::Config] with 'before'", > "Debug: 
File[/etc/redis]: Adding default for owner", > "Debug: File[/etc/redis]: Adding default for group", > "Debug: File[/etc/systemd/system/redis.service.d/]: Adding default for mode", > "Debug: File[/etc/redis.conf.puppet]: Adding default for owner", > "Debug: File[/etc/redis.conf.puppet]: Adding default for group", > "Debug: File[/etc/redis.conf.puppet]: Adding default for mode", > "Info: Applying configuration version '1538484406'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[redis]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-redis-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Preinstall/before: subscribes to Class[Redis::Install]", > "Debug: /Stage[main]/Redis::Install/before: subscribes to Class[Redis::Config]", > "Debug: /Stage[main]/Redis::Ulimit/Augeas[Systemd redis ulimit]/notify: subscribes to Exec[systemd-reload-redis]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/require: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]/subscribe: subscribes to File[/etc/redis.conf.puppet]", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: Adding autorequire relationship with File[/etc/systemd/system/redis.service.d/]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", 
> "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, 
augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Resource is being skipped, unscheduling all events", > 
"Debug: Class[Redis::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Preinstall]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Preinstall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Debug: 
/Stage[main]/Redis::Config/File[/etc/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/log/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/lib/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Debug: Redis::Instance[default]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Redis::Instance[default]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Ulimit]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Ulimit]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[Systemd 
redis ulimit](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'defnode' with params [\"nofile\", \"/etc/systemd/system/redis.service.d/limits.conf/Service/LimitNOFILE\", \"\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'set' with params [\"$nofile/value\", \"10240\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Skipping because no files were changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Closed the augeas connection", > "Info: Class[Redis::Ulimit]: Unscheduling all events on Class[Redis::Ulimit]", > "Debug: Class[Redis::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Service]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[redis-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[redis-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qe3ogh returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-qe3ogh property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}be99a9a28fde3a84874841df38523dcd'", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: Scheduling refresh of Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: The container 
Redis::Instance[default] will propagate my refresh event", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Unscheduling all events on Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Info: Redis::Instance[default]: Unscheduling all events on Redis::Instance[default]", > "Info: Class[Redis::Config]: Unscheduling all events on Class[Redis::Config]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-flmdez returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-flmdez property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-4qyccb returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-4qyccb property set --node controller-0 redis-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-4qyccb diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-4qyccb.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 redis-role=true -> ", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]: The container Pacemaker::Property[redis-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[redis-role-controller-0]: Unscheduling all events on Pacemaker::Property[redis-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1pwi9l3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1pwi9l3 constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-j8yfmu returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-j8yfmu resource show redis-bundle > /dev/null 2>&1", > "Debug: Exists: bundle redis-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l0xn2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l0xn2 resource bundle create redis-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" 
network=host storage-map id=redis-cfg-files source-dir=/var/lib/kolla/config_files/redis.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=redis-cfg-data-redis source-dir=/var/lib/config-data/puppet-generated/redis/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=redis-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=redis-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=redis-lib source-dir=/var/lib/redis target-dir=/var/lib/redis options=rw storage-map id=redis-log source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw storage-map id=redis-run source-dir=/var/run/redis target-dir=/var/run/redis options=rw storage-map id=redis-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=redis-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=redis-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=redis-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=redis-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3124 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l0xn2 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l0xn2.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: location_rule_create: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vqe5az returned ", > 
"Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vqe5az constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vqe5az diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vqe5az.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1m6fhkm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1m6fhkm resource enable redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1m6fhkm diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1m6fhkm.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]: The container Pacemaker::Resource::Bundle[redis-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[redis-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: Pacemaker::Resource::Ocf[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ocf[redis]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ut6g4k returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ut6g4k constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs 
cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l1l4d2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1l1l4d2 resource show redis > /dev/null 2>&1", > "Debug: Exists: resource redis exists 1 location exists 0 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-gto02v returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-gto02v resource create redis ocf:heartbeat:redis wait_last_known_master=true meta notify=true ordered=true interleave=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-gto02v diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-gto02v.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]: The container Pacemaker::Resource::Ocf[redis] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[redis]: Unscheduling all events on Pacemaker::Resource::Ocf[redis]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 32798980", > "Notice: Applied catalog in 41.58 seconds", > " Total: 13", > " Success: 13", > " Changed: 13", > " Out of sync: 13", > " Total: 42", > " Augeas: 0.01", > " File: 0.02", > " Pcmk property: 10.11", > " Pcmk resource: 11.37", > " Last run: 1538484449", > " Pcmk bundle: 19.89", > " Total: 43.17", > " Config: 1538484406", > "Debug: Finishing transaction 43162840", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/haproxy_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::haproxy_bundle from tripleo/profile/pacemaker/haproxy_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::deployed_ssl_cert_path in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up haproxy_short_bootstrap_node_name in 
JSON backend", > "Debug: hiera(): Looking up enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::haproxy from tripleo/profile/base/haproxy into production", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::certificates_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::step in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy from tripleo/haproxy into production", > "Debug: hiera(): Looking up tripleo::haproxy::controller_virtual_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_service_manage in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_global_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_timeout in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_listen_bind_param in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_log_address in JSON 
backend", > "Debug: hiera(): Looking up tripleo::haproxy::activate_httplog in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_globals_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_defaults_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_daemon in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_socket_access_level in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_user in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts_names in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::use_internal_certificates in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_cipher_suite in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove in JSON backend", > "Debug: 
hiera(): Looking up tripleo::haproxy::glance_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_clustercheck in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_max_conn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::openshift_master 
in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::rabbitmq in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::midonet_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_manage_lb in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_ws in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ui in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_network in JSON 
backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_sticky_sessions in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_session_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::openshift_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara_network in JSON backend", > "Debug: 
hiera(): Looking up tripleo::haproxy::swift_proxy_server_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_ports in JSON backend", > "Debug: hiera(): Looking up controller_node_ips in JSON backend", > "Debug: hiera(): Looking up controller_node_names in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up swift_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_enabled in JSON backend", > "Debug: hiera(): Looking up horizon_enabled in JSON backend", > "Debug: hiera(): Looking up mysql_enabled in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_enabled in JSON backend", > "Debug: hiera(): Looking up openshift_master_enabled in JSON backend", > "Debug: hiera(): Looking up etcd_enabled in JSON backend", > "Debug: hiera(): Looking up enable_docker_registry in JSON backend", > "Debug: hiera(): Looking up redis_enabled in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_enabled in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_enabled in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_enabled in JSON backend", > "Debug: hiera(): Looking up tripleo_ui_enabled in JSON backend", > "Debug: hiera(): Looking up enable_ui in JSON backend", > "Debug: hiera(): Looking up aodh_api_network in JSON backend", > "Debug: hiera(): Looking up barbican_api_network in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up cinder_api_network in JSON backend", > "Debug: hiera(): Looking up congress_api_network in JSON backend", > "Debug: hiera(): Looking up designate_api_network 
in JSON backend", > "Debug: hiera(): Looking up docker_registry_network in JSON backend", > "Debug: hiera(): Looking up glance_api_network in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_network in JSON backend", > "Debug: hiera(): Looking up horizon_network in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up ironic_api_network in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_network in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_network in JSON backend", > "Debug: hiera(): Looking up keystone_sticky_sessions in JSON backend", > "Debug: hiera(): Looking up keystone_session_cookie, in JSON backend", > "Debug: hiera(): Looking up manila_api_network in JSON backend", > "Debug: hiera(): Looking up mistral_api_network in JSON backend", > "Debug: hiera(): Looking up neutron_api_network in JSON backend", > "Debug: hiera(): Looking up nova_api_network in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_network in JSON backend", > "Debug: hiera(): Looking up nova_placement_network in JSON backend", > "Debug: hiera(): Looking up octavia_api_network in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_network in JSON backend", > "Debug: hiera(): Looking up openshift_master_network in JSON backend", > "Debug: hiera(): Looking up panko_api_network in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up ec2_api_network in JSON backend", > "Debug: hiera(): Looking up etcd_network in JSON backend", > "Debug: hiera(): Looking up sahara_api_network in JSON backend", > "Debug: hiera(): Looking up swift_proxy_network in JSON backend", > "Debug: hiera(): Looking up tacker_api_network in 
JSON backend", > "Debug: hiera(): Looking up trove_api_network in JSON backend", > "Debug: hiera(): Looking up zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up mysql_vip in JSON backend", > "Debug: hiera(): Looking up rabbitmq_vip in JSON backend", > "Debug: hiera(): Looking up redis_vip in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/init.pp' in environment production", > "Debug: Automatically imported haproxy from haproxy into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/params.pp' in environment production", > "Debug: Automatically imported haproxy::params from haproxy/params into production", > "Debug: hiera(): Looking up haproxy::package_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::package_name in JSON backend", > "Debug: hiera(): Looking up haproxy::service_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::service_options in JSON backend", > "Debug: hiera(): Looking up haproxy::sysconfig_options in JSON backend", > "Debug: hiera(): Looking up haproxy::merge_options in JSON backend", > "Debug: hiera(): Looking up haproxy::restart_command in JSON backend", > "Debug: hiera(): Looking up haproxy::custom_fragment in JSON backend", > "Debug: hiera(): Looking up haproxy::config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_file in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_validate_cmd in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_service in JSON backend", > "Debug: hiera(): Looking up haproxy::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/instance.pp' in environment production", > "Debug: Automatically imported haproxy::instance from haproxy/instance into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/endpoint.pp' in environment production", > "Debug: 
Automatically imported tripleo::haproxy::endpoint from tripleo/haproxy/endpoint into production", > "Debug: hiera(): Looking up enabled_services in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/service_endpoints.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::service_endpoints from tripleo/haproxy/service_endpoints into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/stats.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::stats from tripleo/haproxy/stats into production", > "Debug: hiera(): Looking up tripleo::haproxy::stats::certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/listen.pp' in environment production", > "Debug: Automatically imported haproxy::listen from haproxy/listen into production", > "Debug: hiera(): Looking up keystone_admin_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_names in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_names in JSON backend", > "Debug: hiera(): Looking up neutron_api_vip in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_ips in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_names in JSON backend", > "Debug: hiera(): Looking up cinder_api_vip in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_ips in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_names in JSON backend", > "Debug: hiera(): Looking up sahara_api_vip in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_ips in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_names in JSON backend", > "Debug: hiera(): Looking up glance_api_vip in JSON 
backend", > "Debug: hiera(): Looking up glance_api_node_ips in JSON backend", > "Debug: hiera(): Looking up glance_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_api_vip in JSON backend", > "Debug: hiera(): Looking up nova_api_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_placement_vip in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_names in JSON backend", > "Debug: hiera(): Looking up nova_metadata_vip in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_names in JSON backend", > "Debug: hiera(): Looking up aodh_api_vip in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_ips in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_names in JSON backend", > "Debug: hiera(): Looking up panko_api_vip in JSON backend", > "Debug: hiera(): Looking up panko_api_node_ips in JSON backend", > "Debug: hiera(): Looking up panko_api_node_names in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_vip in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_ips in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_names in JSON backend", > "Debug: hiera(): Looking up swift_proxy_vip in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_ips in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_names in JSON backend", > "Debug: hiera(): Looking up heat_api_vip in JSON backend", > "Debug: hiera(): Looking up heat_api_node_ips in JSON backend", > "Debug: hiera(): Looking up heat_api_node_names in JSON backend", > "Debug: hiera(): Looking up horizon_vip in JSON backend", > "Debug: hiera(): Looking up horizon_node_ips in JSON backend", > "Debug: hiera(): Looking up horizon_node_names in JSON backend", > "Debug: importing 
'/etc/puppet/modules/tripleo/manifests/haproxy/horizon_endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::horizon_endpoint from tripleo/haproxy/horizon_endpoint into production", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_endpoint::public_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon::options in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/balancermember.pp' in environment production", > "Debug: Automatically imported haproxy::balancermember from haproxy/balancermember into production", > "Debug: hiera(): Looking up mysql_node_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > "Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing 
'/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::post::logging_settings in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is 
blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: hiera(): Looking up redis_node_ips in JSON backend", > "Debug: hiera(): Looking up redis_node_names in JSON backend", > "Debug: hiera(): Looking up midonet_cluster_vip in JSON backend", > "Debug: hiera(): Looking up haproxy_short_node_names in JSON backend", > "Debug: hiera(): Looking up controller_virtual_ip in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp' in environment production", > "Debug: Automatically imported tripleo::pacemaker::haproxy_with_vip from tripleo/pacemaker/haproxy_with_vip into production", > "Debug: hiera(): Looking up public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up network_virtual_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/config.pp' in environment production", > "Debug: Automatically imported haproxy::config from haproxy/config into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/install.pp' in environment production", > "Debug: Automatically imported haproxy::install from haproxy/install into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/service.pp' in environment production", > "Debug: Automatically imported haproxy::service from haproxy/service into production", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_endpoints in 
JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::ceilometer_agent_notification::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_endpoints in JSON 
backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.container_image_prepare.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.container_image_prepare.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::container_image_prepare::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::container_image_prepare::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): 
Looking up tripleo::glance_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo.heat_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_userlists in JSON backend", 
> "Debug: hiera(): Looking up tripleo.kernel.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo::mysql_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo.neutron_ovs_agent.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo::nova_placement::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::panko_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_endpoints in JSON 
backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_userlists in JSON backend", > "Debug: 
hiera(): Looking up tripleo.tripleo_firewall.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.xinetd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.xinetd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::xinetd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::xinetd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::ceilometer_agent_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt_guests.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt_guests.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt_guests::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt_guests::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_userlists in JSON backend", > "Debug: importing 
'/etc/puppet/modules/haproxy/manifests/backend.pp' in environment production", > "Debug: Automatically imported haproxy::backend from haproxy/backend into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/globals.pp' in environment production", > "Debug: Automatically imported haproxy::globals from haproxy/globals into production", > "Debug: hiera(): Looking up haproxy::globals::sort_options_alphabetic in JSON backend", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.08 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: 
template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.09 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for public_certificate", > "Debug: 
Tripleo::Haproxy::Endpoint[neutron]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::neutron::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::cinder::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for member_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::sahara::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for 
use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for 
internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::aodh::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for 
manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::panko::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for haproxy_listen_bind_param", > "Debug: 
Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn::options in JSON backend", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.03 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.06 seconds", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.10 seconds", > "Debug: Scope(Haproxy::Balancermember[horizon_172.17.1.20_controller-0.internalapi.localdomain]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template 
haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Balancermember[mysql-backup]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.container_image_prepare.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::container_image_prepare::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up internal_api_subnet in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::neutron_ovs_agent::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.firewall_rules in 
JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up snmpd_network in JSON backend", > "Debug: hiera(): Looking up ctrlplane_subnet in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend", > 
"Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.xinetd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::xinetd::firewall_rules in JSON backend", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.02 seconds", > "Debug: Scope(Haproxy::Balancermember[redis]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up haproxy_docker in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ip.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ip from pacemaker/resource/ip into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/order.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::order from pacemaker/constraint/order into production", > "Debug: importing 
'/etc/puppet/modules/pacemaker/manifests/constraint/colocation.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::colocation from pacemaker/constraint/colocation into production", > "Debug: Scope(Haproxy::Config[haproxy]): Retrieving template haproxy/haproxy-base.cfg.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Balancermember[keystone_admin]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[keystone_public]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template 
haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[neutron]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[cinder]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.05 seconds", > "Debug: Scope(Haproxy::Balancermember[sahara]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: 
Scope(Haproxy::Balancermember[glance_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_osapi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_placement]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_metadata]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: 
Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_novncproxy]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[aodh]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[panko]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[gnocchi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: 
Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[swift_proxy_server]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[heat_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[heat_cfn]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up pacemaker::resource::ip::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ip::update_settle_secs in JSON backend", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-192.168.24.16-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-192.168.24.16-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-10.0.0.106-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-10.0.0.106-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to 
Pcmk_constraint[order-ip-172.17.1.26-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.26-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.28-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.28-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.3.10-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.3.10-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.4.18-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.4.18-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-192.168.24.16] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-10.0.0.106] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.26] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.28] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.3.10] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.4.18] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-haproxy-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 mysql_haproxy ipv4] with 'before'", > "Debug: Adding relationship from 
Class[Tripleo::Firewall::Pre] to Firewall[100 mysql_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 redis_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 redis_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_admin_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_admin_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to 
Firewall[100 cinder_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 
nova_placement_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_metadata_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_metadata_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy ipv6] with 
'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy_ssl ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy_ssl ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[128 aodh-api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[128 aodh-api ipv6] with 'before'", > "Debug: 
Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 ceph_mgr ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 ceph_mgr ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[110 ceph_mon ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[110 ceph_mon ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[119 cinder ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[119 cinder ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[120 iscsi initiator ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[120 iscsi initiator ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[112 glance_api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[112 glance_api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[129 gnocchi-api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[129 gnocchi-api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 gnocchi-statsd ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 gnocchi-statsd ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[107 haproxy stats ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[107 haproxy stats ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_api ipv4] with 'before'", > "Debug: Adding relationship from 
Class[Tripleo::Firewall::Pre] to Firewall[125 heat_api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_cfn ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_cfn ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[127 horizon ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[127 horizon ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[111 keystone ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[111 keystone ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[121 memcached ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[104 mysql galera-bundle ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[104 mysql galera-bundle ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[114 neutron api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[114 neutron api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[115 neutron dhcp input ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[115 neutron dhcp input ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[116 neutron dhcp output ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[116 neutron dhcp output ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[106 neutron_l3 vrrp ipv4] with 'before'", > "Debug: Adding relationship from 
Class[Tripleo::Firewall::Pre] to Firewall[106 neutron_l3 vrrp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[118 neutron vxlan networks ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[118 neutron vxlan networks ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[136 neutron gre networks ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[136 neutron gre networks ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 nova_api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 nova_api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[138 nova_placement ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[138 nova_placement ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[137 nova_vnc_proxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[137 nova_vnc_proxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[105 ntp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[105 ntp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[130 pacemaker tcp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[130 pacemaker tcp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[131 pacemaker udp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[131 pacemaker udp ipv6] with 'before'", > "Debug: Adding relationship from 
Class[Tripleo::Firewall::Pre] to Firewall[140 panko-api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 panko-api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[109 rabbitmq-bundle ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[109 rabbitmq-bundle ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[108 redis-bundle ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[108 redis-bundle ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[132 sahara ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[132 sahara ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[122 swift proxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[122 swift proxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[123 swift storage ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[123 swift storage ipv6] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 
keystone_admin_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl 
ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 
nova_novncproxy_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] 
to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding 
relationship from Firewall[119 cinder ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[127 
horizon ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > 
"Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding 
relationship from Firewall[108 redis-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp 
ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
keystone_public_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", 
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl 
ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", 
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 
mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 
panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established 
rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 
memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 
pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Haproxy::Listen[haproxy.stats] to Exec[haproxy-reload] with 
'notify'", > "Debug: Adding relationship from Haproxy::Listen[horizon] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[mysql] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[panko] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from 
Haproxy::Balancermember[horizon_172.17.1.20_controller-0.internalapi.localdomain] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[mysql-backup] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[panko] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_api] to Exec[haproxy-reload] with 
'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Anchor[haproxy::haproxy::begin] to Haproxy::Install[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Install[haproxy] to Haproxy::Config[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Config[haproxy] to Haproxy::Service[haproxy] with 'notify'", > "Debug: Adding relationship from Haproxy::Service[haproxy] to Anchor[haproxy::haproxy::end] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[control_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[control_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[control_vip-then-haproxy] to Pacemaker::Constraint::Colocation[control_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[public_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[public_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[public_vip-then-haproxy] to Pacemaker::Constraint::Colocation[public_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[redis_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[redis_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[redis_vip-then-haproxy] to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[internal_api_vip] to 
Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_mgmt_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.58 seconds", > "Debug: /Firewall[000 accept related established rules ipv4]: [validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 
dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: 
[validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 
heat_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api 
ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[132 sahara ipv4]: [validate]", > "Debug: /Firewall[132 sahara ipv6]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > 
"Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Info: Applying configuration version '1538484456'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-192.168.24.16-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-192.168.24.16-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-10.0.0.106-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-10.0.0.106-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.26-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.26-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.28-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.28-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.3.10-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.3.10-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to 
Pcmk_constraint[order-ip-172.17.4.18-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.4.18-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-192.168.24.16]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-10.0.0.106]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.26]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.28]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.3.10]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.4.18]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-haproxy-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Haproxy::Stats/Haproxy::Listen[haproxy.stats]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Listen[horizon]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Balancermember[horizon_172.17.1.20_controller-0.internalapi.localdomain]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[mysql]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[mysql-backup]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 mysql_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: 
subscribes to Firewall[100 mysql_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 redis_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 redis_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_admin_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_admin_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy ipv6]", 
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_metadata_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_metadata_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy ipv6]", 
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy_ssl ipv6]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[128 aodh-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[128 aodh-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 ceph_mgr ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 ceph_mgr ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[110 ceph_mon ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[110 ceph_mon ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[119 cinder ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[119 cinder ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[120 iscsi initiator ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[120 iscsi initiator ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[112 glance_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: 
subscribes to Firewall[112 glance_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[129 gnocchi-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[129 gnocchi-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 gnocchi-statsd ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 gnocchi-statsd ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[107 haproxy stats ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[107 haproxy stats ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_cfn ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_cfn ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[127 horizon ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[127 horizon ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[111 keystone ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[111 keystone ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[121 memcached ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[104 mysql galera-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[104 mysql galera-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[114 neutron api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[114 neutron api ipv6]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[115 neutron dhcp input ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[115 neutron dhcp input ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[116 neutron dhcp output ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[116 neutron dhcp output ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[106 neutron_l3 vrrp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[106 neutron_l3 vrrp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[118 neutron vxlan networks ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[118 neutron vxlan networks ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[136 neutron gre networks ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[136 neutron gre networks ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 nova_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 nova_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[138 nova_placement ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[138 nova_placement ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[137 nova_vnc_proxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[137 nova_vnc_proxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[105 ntp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[105 ntp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[130 pacemaker tcp ipv4]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[130 pacemaker tcp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[131 pacemaker udp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[131 pacemaker udp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 panko-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 panko-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[109 rabbitmq-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[109 rabbitmq-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[108 redis-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[108 redis-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[132 sahara ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[132 sahara ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[122 swift proxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[122 swift proxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[123 swift storage ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[123 swift storage ipv6]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]/subscribe: subscribes to Class[Haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/notify: subscribes to Haproxy::Service[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/before: subscribes to Haproxy::Config[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Service[haproxy]/before: subscribes to Anchor[haproxy::haproxy::end]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]/before: subscribes to Haproxy::Install[haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Listen[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Balancermember[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Listen[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Balancermember[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Listen[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Balancermember[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Listen[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Balancermember[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Listen[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Balancermember[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Listen[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Balancermember[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Listen[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Balancermember[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Listen[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Balancermember[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Listen[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Balancermember[nova_metadata]/notify: subscribes to 
Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Listen[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Balancermember[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Listen[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Balancermember[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Listen[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Balancermember[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Listen[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Balancermember[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Listen[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Balancermember[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Listen[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Balancermember[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Listen[heat_cfn]/notify: 
subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Balancermember[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop 
all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: 
subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy 
ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 
keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 
neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 
neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 
cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", 
> "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: 
subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 
nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy 
ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 
panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 
heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 
heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 
aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 
ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: 
subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi 
initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api 
ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql 
galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron 
api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input 
ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre 
networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 
nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 
pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: 
subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to 
Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift 
proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]/before: subscribes to File[/etc/haproxy/haproxy.cfg]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh 
ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept 
ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding 
autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire 
relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore 
relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship 
with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > 
"Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: 
/Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 
sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > 
"Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: 
Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", 
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 
nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: 
/Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > 
"Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship 
with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > 
"Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire 
relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship 
with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 
swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship 
with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding 
autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 
aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: 
Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding 
autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore 
relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: 
/Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship 
with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding 
autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: 
Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks 
ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with 
Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: 
Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]", > 
"Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara 
ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with 
Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship 
with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]: Skipping automatic relationship with File[/etc/haproxy/haproxy.cfg]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Adding autorequire relationship with File[/etc/haproxy]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Resource is 
being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Instance[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Instance[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", 
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[cinder_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[container_image_prepare]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[container_image_prepare]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events", > 
"Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Resource 
is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[xinetd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[xinetd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt_guests]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt_guests]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy::Stats]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[haproxy.stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[haproxy.stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Endpoint[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Endpoint[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Class[Tripleo::Haproxy::Horizon_endpoint]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[horizon_172.17.1.20_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[horizon_172.17.1.20_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Class[Tripleo::Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall::Pre]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Pre]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux::Redhat]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux::Redhat]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Service_rules[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Resource 
is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[container_image_prepare]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[container_image_prepare]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > 
"Debug: Tripleo::Firewall::Service_rules[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Service_rules[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: 
Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[xinetd]: 
Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[xinetd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[redis]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-nqwhm3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-nqwhm3 property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Resource is being skipped, 
unscheduling all events", > "Debug: Haproxy::Install[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Install[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Globals]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Globals]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 
keystone_admin_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Resource is being skipped, 
unscheduling all events", > "Debug: Haproxy::Listen[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[cinder]: Resource is being skipped, unscheduling all events", > "Debug: 
Haproxy::Balancermember[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_placement]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_metadata]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[panko]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[panko]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Not tagged with file, file_line, 
concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.20_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.20_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Prefetching ip6tables resources for firewall", > 
"Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Inserting rule 100 mysql_haproxy ipv4", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [insert_order]", 
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Inserting rule 100 mysql_haproxy ipv6", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 mysql_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 mysql_haproxy]", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Inserting rule 100 redis_haproxy ipv4", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Inserting rule 100 redis_haproxy ipv6", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m 
multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 redis_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 redis_haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q7ibua returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q7ibua property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10n3o5v returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10n3o5v property set --node controller-0 haproxy-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10n3o5v diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-10n3o5v.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 haproxy-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]: The container 
Pacemaker::Property[haproxy-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[haproxy-role-controller-0]: Unscheduling all events on Pacemaker::Property[haproxy-role-controller-0]", > "Debug: Pacemaker::Resource::Ip[control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Config[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Config[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-00-header]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-00-header]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_admin_haproxy ipv4", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 
keystone_admin_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_admin_haproxy ipv6", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_admin_haproxy]", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy ipv4", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy ipv6", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Current 
resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy_ssl ipv4", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh 
event", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy_ssl ipv6", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-neutron_balancermember_neutron]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Inserting rule 100 neutron_haproxy ipv4", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy ipv6", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] 
will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 neutron_haproxy_ssl ipv4", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy_ssl ipv6", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl 
ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Inserting rule 100 cinder_haproxy ipv4", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 
cinder_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy ipv6", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 cinder_haproxy_ssl ipv4", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy_ssl ipv6", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Inserting rule 100 sahara_haproxy ipv4", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy ipv6", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: The 
container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 sahara_haproxy_ssl ipv4", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy_ssl ipv6", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 
sahara_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy ipv4", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 
glance_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy ipv6", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy_ssl ipv4", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 
glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy_ssl ipv6", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Resource is being skipped, 
unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy ipv4", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy ipv6", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: 
Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy ipv4", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy 
ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy ipv6", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy]", > "Debug: Firewall[100 
nova_placement_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 19 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > 
"Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Inserting rule 100 nova_metadata_haproxy ipv4", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 
nova_metadata_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_metadata_haproxy ipv6", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_metadata_haproxy]", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Inserting rule 100 aodh_haproxy ipv4", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): 
Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy ipv6", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 aodh_haproxy_ssl ipv4", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Current 
resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy_ssl ipv6", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Inserting rule 100 panko_haproxy ipv4", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 24 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy ipv6", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW 
-j ACCEPT -m comment --comment 100 panko_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 panko_haproxy_ssl ipv4", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy_ssl ipv6", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t 
filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy ipv4", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041 -m state 
--state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy ipv6", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy_ssl ipv4", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I 
INPUT 12 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy_ssl ipv6", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy 
ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 33 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: 
Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy ipv4", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy ipv6", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/ensure: 
created", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy ipv4", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv4'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy ipv6", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13005 
-m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]", > "Debug: Class[Tripleo::Firewall::Post]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Post]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[998 log all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[998 log all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pkz9kx returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pkz9kx constraint list | grep location-ip-192.168.24.16 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-14aopmf returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-14aopmf resource show ip-192.168.24.16 > /dev/null 2>&1", > "Debug: Exists: resource ip-192.168.24.16 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13juiep returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13juiep resource create ip-192.168.24.16 IPaddr2 ip=192.168.24.16 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13juiep diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-13juiep.orig returned 0 -> 
CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-192.168.24.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-192.168.24.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-p8w7vt returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-p8w7vt constraint location ip-192.168.24.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-p8w7vt diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-p8w7vt.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-cplx0f returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-cplx0f resource enable ip-192.168.24.16", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-cplx0f diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-cplx0f.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.16]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.16]: The container Pacemaker::Resource::Ip[control_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[control_vip]: Unscheduling all events on Pacemaker::Resource::Ip[control_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-s1gw1m 
returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-s1gw1m constraint list | grep location-ip-10.0.0.106 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1wbryv0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1wbryv0 resource show ip-10.0.0.106 > /dev/null 2>&1", > "Debug: Exists: resource ip-10.0.0.106 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v0txtz returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v0txtz resource create ip-10.0.0.106 IPaddr2 ip=10.0.0.106 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v0txtz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v0txtz.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-10.0.0.106 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-10.0.0.106 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-shof7i returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-shof7i constraint location ip-10.0.0.106 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-shof7i diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-shof7i.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ymfjsn returned ", > "Debug: try 1/10: /usr/sbin/pcs 
-f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ymfjsn resource enable ip-10.0.0.106", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ymfjsn diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ymfjsn.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.106]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.106]: The container Pacemaker::Resource::Ip[public_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[public_vip]: Unscheduling all events on Pacemaker::Resource::Ip[public_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1799kjw returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1799kjw constraint list | grep location-ip-172.17.1.26 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-paa840 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-paa840 resource show ip-172.17.1.26 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.26 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1njsoj7 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1njsoj7 resource create ip-172.17.1.26 IPaddr2 ip=172.17.1.26 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1njsoj7 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1njsoj7.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.26 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.26 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mebiul returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mebiul constraint location ip-172.17.1.26 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mebiul diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-mebiul.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-rsjjgm returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-rsjjgm resource enable ip-172.17.1.26", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-rsjjgm diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-rsjjgm.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.26]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.26]: The container Pacemaker::Resource::Ip[redis_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[redis_vip]: Unscheduling all events on Pacemaker::Resource::Ip[redis_vip]", > "Debug: backup_cib: /usr/sbin/pcs 
cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-wcou89 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-wcou89 constraint list | grep location-ip-172.17.1.28 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1nwmdi3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1nwmdi3 resource show ip-172.17.1.28 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.28 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19j9x0m returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19j9x0m resource create ip-172.17.1.28 IPaddr2 ip=172.17.1.28 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19j9x0m diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19j9x0m.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.28 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.28 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r143rk returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r143rk constraint location ip-172.17.1.28 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r143rk diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r143rk.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oa75po returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oa75po resource enable ip-172.17.1.28", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oa75po diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1oa75po.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.28]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.28]: The container Pacemaker::Resource::Ip[internal_api_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[internal_api_vip]: Unscheduling all events on Pacemaker::Resource::Ip[internal_api_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ilzjmh returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ilzjmh constraint list | grep location-ip-172.17.3.10 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-nwvuq6 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-nwvuq6 resource show ip-172.17.3.10 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.3.10 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-sqbskk returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-sqbskk resource create ip-172.17.3.10 IPaddr2 ip=172.17.3.10 cidr_netmask=32 meta resource-stickiness=INFINITY 
--disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-sqbskk diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-sqbskk.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.3.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.3.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n7ztjh returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n7ztjh constraint location ip-172.17.3.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n7ztjh diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n7ztjh.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-y3lz9s returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-y3lz9s resource enable ip-172.17.3.10", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-y3lz9s diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-y3lz9s.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.10]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.10]: The container Pacemaker::Resource::Ip[storage_vip] will propagate my refresh event", > "Info: 
Pacemaker::Resource::Ip[storage_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1e8u2bt returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1e8u2bt constraint list | grep location-ip-172.17.4.18 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uv8fjm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-uv8fjm resource show ip-172.17.4.18 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.4.18 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9kfz2 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9kfz2 resource create ip-172.17.4.18 IPaddr2 ip=172.17.4.18 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9kfz2 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9kfz2.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.4.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.4.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1lgao60 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1lgao60 constraint location ip-172.17.4.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1lgao60 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1lgao60.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1tn1q0b returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1tn1q0b resource enable ip-172.17.4.18", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1tn1q0b diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1tn1q0b.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.18]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.18]: The container Pacemaker::Resource::Ip[storage_mgmt_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-6wqx4b returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-6wqx4b constraint list | grep location-haproxy-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-14v9asr 
returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-14v9asr resource show haproxy-bundle > /dev/null 2>&1", > "Debug: Exists: bundle haproxy-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1yl8nkm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1yl8nkm resource bundle create haproxy-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=haproxy-cfg-files source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=haproxy-cfg-data source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=haproxy-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=haproxy-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=haproxy-var-lib source-dir=/var/lib/haproxy target-dir=/var/lib/haproxy options=rw storage-map id=haproxy-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=haproxy-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=haproxy-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=haproxy-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=haproxy-dev-log source-dir=/dev/log target-dir=/dev/log options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1yl8nkm diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1yl8nkm.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vmfggz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vmfggz constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vmfggz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vmfggz.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bpz40e returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bpz40e resource enable haproxy-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bpz40e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-bpz40e.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]: The container Pacemaker::Resource::Bundle[haproxy-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[haproxy-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: 
Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c5qgeo returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c5qgeo constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n45r60 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n45r60 constraint order start ip-192.168.24.16 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n45r60 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-n45r60.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.16-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.16-haproxy-bundle]: The container Pacemaker::Constraint::Order[control_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: 
Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-228314 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-228314 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q0yw81 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q0yw81 constraint colocation add ip-192.168.24.16 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q0yw81 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1q0yw81.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.16-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.16-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[control_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1p1axx1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1p1axx1 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1jvk5vd returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1jvk5vd constraint order start ip-10.0.0.106 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1jvk5vd diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1jvk5vd.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.106-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.106-haproxy-bundle]: The container Pacemaker::Constraint::Order[public_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pqd3sq returned ", > "Debug: 
/usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pqd3sq constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1f9v8tn returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1f9v8tn constraint colocation add ip-10.0.0.106 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1f9v8tn diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1f9v8tn.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.106-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.106-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[public_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pen8iz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pen8iz constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-tyj0i returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-tyj0i constraint order start ip-172.17.1.26 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-tyj0i diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-tyj0i.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.26-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.26-haproxy-bundle]: The container Pacemaker::Constraint::Order[redis_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kslqi9 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kslqi9 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19susen returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19susen constraint colocation add ip-172.17.1.26 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19susen 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-19susen.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.26-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.26-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vff0zz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1vff0zz constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vz6pyc returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vz6pyc constraint order start ip-172.17.1.28 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vz6pyc diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-vz6pyc.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.28-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.28-haproxy-bundle]: The container Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ur4q6j returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-ur4q6j constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9ne50 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9ne50 constraint colocation add ip-172.17.1.28 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9ne50 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1i9ne50.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.28-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.28-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r6z85j returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-r6z85j constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-t8m188 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-t8m188 constraint order start ip-172.17.3.10 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-t8m188 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-t8m188.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.10-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.10-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Unscheduling all events on 
Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-6gfyya returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-6gfyya constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1alxjft returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1alxjft constraint colocation add ip-172.17.3.10 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1alxjft diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1alxjft.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.10-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.10-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Unscheduling all events on 
Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v97ut9 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-v97ut9 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xl4cwk returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xl4cwk constraint order start ip-172.17.4.18 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xl4cwk diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xl4cwk.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.18-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.18-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Resource is being 
skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-fbv9l5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-fbv9l5 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1gujans returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1gujans constraint colocation add ip-172.17.4.18 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1gujans diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1gujans.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.18-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.18-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Info: Computing checksum on file /etc/haproxy/haproxy.cfg", > "Info: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810", > "Debug: Executing: '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg20181002-8-1p3ihn0 -c'", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: [WARNING] 274/125031 (3161) : parsing [/etc/haproxy/haproxy.cfg20181002-8-1p3ihn0:170] : HTTP log/header format not usable with proxy 'nova_novncproxy' (needs 'mode http').", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Configuration file is valid", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}e632867547a31f39d12662dd19c6e877'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container Concat[/etc/haproxy/haproxy.cfg] will propagate my refresh event", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container /etc/haproxy/haproxy.cfg will propagate my refresh event", > "Debug: /etc/haproxy/haproxy.cfg: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /etc/haproxy/haproxy.cfg: Resource is being skipped, unscheduling all events", > "Info: /etc/haproxy/haproxy.cfg: Unscheduling all events on /etc/haproxy/haproxy.cfg", > "Info: Concat[/etc/haproxy/haproxy.cfg]: Unscheduling all events on Concat[/etc/haproxy/haproxy.cfg]", > "Debug: Haproxy::Service[haproxy]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Service[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 48556420", > "Notice: Applied catalog in 169.66 seconds", > " Total: 90", > " Success: 90", > " Skipped: 36", > " Out of sync: 89", > " Changed: 89", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File: 0.12", > " Pcmk bundle: 11.47", > " Last run: 1538484632", > " Total: 173.88", > " Firewall: 22.05", > " Pcmk constraint: 42.27", > " Pcmk property: 5.83", > " Config retrieval: 6.28", > " Pcmk resource: 85.86", > " Config: 1538484456", > "Debug: Finishing transaction 48706480", > "+ TAGS=file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker; include 
::tripleo::profile::pacemaker::haproxy_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications." 
> ] >} >2018-10-02 08:50:51,672 p=1004 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks2.json exists] ******** >2018-10-02 08:50:51,672 p=1004 u=mistral | Tuesday 02 October 2018 08:50:51 -0400 (0:00:14.108) 0:22:04.405 ******* >2018-10-02 08:50:51,936 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:50:51,949 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:50:51,964 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:50:52,001 p=1004 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 2] ******************** >2018-10-02 08:50:52,001 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.329) 0:22:04.734 ******* >2018-10-02 08:50:52,039 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:52,068 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:52,083 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:52,109 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 2] *** >2018-10-02 08:50:52,109 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.108) 0:22:04.843 ******* >2018-10-02 08:50:52,145 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:50:52,175 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:50:52,187 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:50:52,194 p=1004 u=mistral | PLAY [External deployment step 3] ********************************************** 
>2018-10-02 08:50:52,213 p=1004 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 08:50:52,214 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.104) 0:22:04.947 ******* >2018-10-02 08:50:52,303 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,358 p=1004 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 08:50:52,358 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.144) 0:22:05.092 ******* >2018-10-02 08:50:52,393 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,397 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,403 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,417 p=1004 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 08:50:52,417 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.058) 0:22:05.150 ******* >2018-10-02 08:50:52,437 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,451 p=1004 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 08:50:52,451 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.033) 
0:22:05.184 ******* >2018-10-02 08:50:52,476 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,489 p=1004 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 08:50:52,489 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.038) 0:22:05.223 ******* >2018-10-02 08:50:52,511 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,527 p=1004 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 08:50:52,527 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.037) 0:22:05.261 ******* >2018-10-02 08:50:52,552 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,567 p=1004 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 08:50:52,568 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.040) 0:22:05.301 ******* >2018-10-02 08:50:52,590 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,604 p=1004 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 08:50:52,604 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.036) 0:22:05.338 ******* >2018-10-02 08:50:52,624 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,639 p=1004 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 08:50:52,639 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.034) 0:22:05.372 ******* >2018-10-02 08:50:52,660 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,674 p=1004 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 08:50:52,674 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.035) 0:22:05.408 ******* >2018-10-02 08:50:52,694 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,708 p=1004 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 08:50:52,708 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.033) 0:22:05.441 ******* >2018-10-02 08:50:52,728 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,743 p=1004 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 08:50:52,743 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.035) 0:22:05.477 ******* >2018-10-02 08:50:52,764 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,777 p=1004 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 08:50:52,778 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.034) 0:22:05.511 ******* >2018-10-02 08:50:52,805 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,818 p=1004 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 08:50:52,818 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.040) 0:22:05.552 ******* >2018-10-02 08:50:52,846 p=1004 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": 
"/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,861 p=1004 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 08:50:52,862 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.043) 0:22:05.595 ******* >2018-10-02 08:50:52,881 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,895 p=1004 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 08:50:52,895 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.033) 0:22:05.629 ******* >2018-10-02 08:50:52,915 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,928 p=1004 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 08:50:52,928 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.033) 0:22:05.662 ******* >2018-10-02 08:50:52,948 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,962 p=1004 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-10-02 08:50:52,962 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.033) 0:22:05.695 ******* >2018-10-02 08:50:52,981 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:52,994 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:50:52,995 p=1004 u=mistral | Tuesday 02 October 2018 08:50:52 -0400 (0:00:00.032) 0:22:05.728 ******* >2018-10-02 08:50:53,014 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
08:50:53,029 p=1004 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 08:50:53,029 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.034) 0:22:05.762 ******* >2018-10-02 08:50:53,049 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,063 p=1004 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 08:50:53,063 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.034) 0:22:05.797 ******* >2018-10-02 08:50:53,084 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,098 p=1004 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 08:50:53,098 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.034) 0:22:05.832 ******* >2018-10-02 08:50:53,124 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,138 p=1004 u=mistral | TASK [Write role data file] **************************************************** >2018-10-02 08:50:53,138 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.039) 0:22:05.872 ******* >2018-10-02 08:50:53,171 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,185 p=1004 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 08:50:53,185 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.046) 0:22:05.918 ******* >2018-10-02 08:50:53,208 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,222 p=1004 u=mistral | TASK [Delete param file] 
******************************************************* >2018-10-02 08:50:53,222 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.037) 0:22:05.955 ******* >2018-10-02 08:50:53,244 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,259 p=1004 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 08:50:53,259 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.037) 0:22:05.992 ******* >2018-10-02 08:50:53,280 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,295 p=1004 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 08:50:53,296 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.036) 0:22:06.029 ******* >2018-10-02 08:50:53,319 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,335 p=1004 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 08:50:53,335 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.039) 0:22:06.068 ******* >2018-10-02 08:50:53,356 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,372 p=1004 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 08:50:53,372 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.036) 0:22:06.105 ******* >2018-10-02 08:50:53,393 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,409 p=1004 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 08:50:53,410 p=1004 u=mistral | Tuesday 
02 October 2018 08:50:53 -0400 (0:00:00.037) 0:22:06.143 ******* >2018-10-02 08:50:53,431 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,438 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for 3] *************************************** >2018-10-02 08:50:53,446 p=1004 u=mistral | PLAY [Overcloud common deploy step tasks 3] ************************************ >2018-10-02 08:50:53,480 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 08:50:53,480 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.070) 0:22:06.214 ******* >2018-10-02 08:50:53,524 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,554 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,569 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,594 p=1004 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 08:50:53,594 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.113) 0:22:06.328 ******* >2018-10-02 08:50:53,630 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,660 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,675 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,701 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 08:50:53,702 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.107) 0:22:06.435 ******* 
>2018-10-02 08:50:53,736 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,765 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,781 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,807 p=1004 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 08:50:53,807 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.105) 0:22:06.540 ******* >2018-10-02 08:50:53,841 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,874 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,889 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,918 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:50:53,918 p=1004 u=mistral | Tuesday 02 October 2018 08:50:53 -0400 (0:00:00.111) 0:22:06.652 ******* >2018-10-02 08:50:53,956 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:53,986 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,001 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,029 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:50:54,029 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.110) 0:22:06.762 ******* >2018-10-02 08:50:54,065 p=1004 u=mistral | skipping: 
[controller-0] => {} >2018-10-02 08:50:54,094 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:50:54,108 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:50:54,134 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 08:50:54,135 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.105) 0:22:06.868 ******* >2018-10-02 08:50:54,169 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,199 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,221 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,253 p=1004 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 08:50:54,253 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.118) 0:22:06.986 ******* >2018-10-02 08:50:54,289 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,318 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,332 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,359 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 08:50:54,359 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.106) 0:22:07.093 ******* >2018-10-02 08:50:54,392 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,415 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,426 p=1004 
u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,448 p=1004 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 08:50:54,448 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.088) 0:22:07.182 ******* >2018-10-02 08:50:54,476 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,501 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,512 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,533 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:50:54,533 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.085) 0:22:07.267 ******* >2018-10-02 08:50:54,566 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,593 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,608 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,637 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:50:54,638 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.104) 0:22:07.371 ******* >2018-10-02 08:50:54,672 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:50:54,699 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:50:54,712 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:50:54,736 p=1004 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 
08:50:54,737 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.099) 0:22:07.470 ******* >2018-10-02 08:50:54,769 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,795 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,807 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,832 p=1004 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 08:50:54,833 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.095) 0:22:07.566 ******* >2018-10-02 08:50:54,864 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,898 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,912 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:54,939 p=1004 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-10-02 08:50:54,939 p=1004 u=mistral | Tuesday 02 October 2018 08:50:54 -0400 (0:00:00.106) 0:22:07.672 ******* >2018-10-02 08:50:55,011 p=1004 u=mistral | skipping: [controller-0] => (item=create_swift_secret.sh) => {"changed": false, "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini 
--get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,014 p=1004 u=mistral | skipping: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": false, "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,015 p=1004 u=mistral | skipping: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file 
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,017 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_discover_hosts.sh) => {"changed": false, "item": ["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node 
$host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,018 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_ensure_default_cell.sh) => {"changed": false, "item": ["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,023 p=1004 u=mistral | skipping: [controller-0] => (item=set_swift_keymaster_key_id.sh) => 
{"changed": false, "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,052 p=1004 u=mistral | skipping: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file 
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,060 p=1004 u=mistral | skipping: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": false, "item": ["nova_statedir_ownership.py", {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if 
self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n 
self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,089 p=1004 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-10-02 08:50:55,089 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.149) 0:22:07.822 ******* >2018-10-02 08:50:55,129 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,131 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,131 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,162 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,163 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,163 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,164 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-10-02 08:50:55,165 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,165 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,165 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,169 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,176 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,181 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,182 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,185 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,190 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,196 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the 
fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,203 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,214 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,219 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,220 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,245 p=1004 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-10-02 08:50:55,246 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.156) 0:22:07.979 ******* >2018-10-02 08:50:55,278 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,309 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,323 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:50:55,404 p=1004 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-10-02 08:50:55,405 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.159) 0:22:08.138 ******* >2018-10-02 
08:50:55,442 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,471 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,487 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,514 p=1004 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-10-02 08:50:55,514 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.109) 0:22:08.248 ******* >2018-10-02 08:50:55,587 p=1004 u=mistral | skipping: [ceph-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,589 p=1004 u=mistral | skipping: [ceph-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,590 p=1004 u=mistral | skipping: [ceph-0] => (item=step_3) => {"changed": false, "item": ["step_3", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,599 p=1004 u=mistral | skipping: [controller-0] => (item=step_1) => {"changed": false, "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' 
'192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,606 p=1004 u=mistral | skipping: [controller-0] => (item=step_2) => {"changed": false, "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", 
"/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include 
::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", 
"/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,609 p=1004 u=mistral | skipping: [compute-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,610 p=1004 u=mistral | skipping: [ceph-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,611 p=1004 u=mistral | skipping: [ceph-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,618 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", 
"privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": 
"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": 
["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", 
"/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", 
"net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,621 p=1004 u=mistral | skipping: [ceph-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,622 p=1004 u=mistral | skipping: [compute-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,632 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", 
"/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", 
"/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,637 p=1004 u=mistral | skipping: [compute-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", 
"privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", 
"/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,643 p=1004 u=mistral | skipping: [controller-0] => (item=step_5) => {"changed": false, "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": 
{"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,646 p=1004 u=mistral | skipping: [compute-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, 
"nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,648 p=1004 u=mistral | skipping: [controller-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,659 p=1004 u=mistral | skipping: [compute-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,659 
p=1004 u=mistral | skipping: [compute-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,685 p=1004 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 08:50:55,686 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.171) 0:22:08.419 ******* >2018-10-02 08:50:55,722 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,754 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,771 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,798 p=1004 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 08:50:55,798 p=1004 u=mistral | Tuesday 02 October 2018 08:50:55 -0400 (0:00:00.112) 0:22:08.531 ******* >2018-10-02 08:50:55,858 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,902 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,908 p=1004 
u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,914 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,920 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,926 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,931 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": false, 
"item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,937 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:55,943 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,022 p=1004 u=mistral | skipping: [controller-0] => 
(item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,029 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,035 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,040 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,046 p=1004 u=mistral | skipping: [controller-0] 
=> (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,052 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,057 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,064 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,069 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,075 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,081 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf 
--config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,088 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,093 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,099 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", 
{"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,105 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,111 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,119 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,125 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,131 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", 
"path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,138 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,144 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,151 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,158 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": 
true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,165 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,170 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,177 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,183 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": false, 
"item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,190 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,197 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,203 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file 
/usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,211 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,216 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir 
/usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,223 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,228 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} 
>2018-10-02 08:50:56,236 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,241 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,248 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,254 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,261 p=1004 u=mistral | skipping: 
[controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,267 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,273 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,279 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 
08:50:56,285 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,291 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,297 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, 
{"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,304 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,310 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,315 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": false, "item": 
["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,321 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,328 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,334 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional 
result was False"} >2018-10-02 08:50:56,340 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,346 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,353 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,359 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,365 p=1004 u=mistral | skipping: 
[controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,371 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,376 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,383 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,388 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": 
false, "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,395 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,402 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,407 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,412 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": false, "item": 
["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,419 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,456 p=1004 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 08:50:56,456 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.658) 0:22:09.189 ******* >2018-10-02 08:50:56,470 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:50:56,502 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:50:56,531 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:50:56,561 p=1004 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-10-02 08:50:56,562 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.105) 0:22:09.295 ******* >2018-10-02 08:50:56,624 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": 
"keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,625 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,665 p=1004 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 08:50:56,665 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.103) 0:22:09.398 ******* >2018-10-02 08:50:56,708 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,743 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,760 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,788 p=1004 u=mistral | TASK [Check for /etc/puppet/check-mode directory for check mode] *************** >2018-10-02 08:50:56,788 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.123) 0:22:09.521 ******* >2018-10-02 08:50:56,822 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,851 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,869 
p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,900 p=1004 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 08:50:56,900 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.111) 0:22:09.633 ******* >2018-10-02 08:50:56,934 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,959 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,969 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:56,993 p=1004 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 08:50:56,993 p=1004 u=mistral | Tuesday 02 October 2018 08:50:56 -0400 (0:00:00.093) 0:22:09.727 ******* >2018-10-02 08:50:57,591 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484657.04-2652236123039/source", "state": "file", "uid": 0} >2018-10-02 08:50:57,619 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484657.07-242477761138070/source", "state": "file", "uid": 0} >2018-10-02 08:50:57,659 p=1004 u=mistral | 
changed: [compute-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484657.1-278265143277866/source", "state": "file", "uid": 0} >2018-10-02 08:50:57,688 p=1004 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 08:50:57,688 p=1004 u=mistral | Tuesday 02 October 2018 08:50:57 -0400 (0:00:00.694) 0:22:10.421 ******* >2018-10-02 08:50:57,723 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:57,755 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:57,767 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:50:57,793 p=1004 u=mistral | TASK [Run puppet host configuration for step 3] ******************************** >2018-10-02 08:50:57,793 p=1004 u=mistral | Tuesday 02 October 2018 08:50:57 -0400 (0:00:00.105) 0:22:10.527 ******* >2018-10-02 08:51:07,793 p=1004 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:51:08,571 p=1004 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:51:12,610 p=1004 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:51:12,637 p=1004 u=mistral | TASK [Debug output for task: Run puppet host 
configuration for step 3] ********* >2018-10-02 08:51:12,637 p=1004 u=mistral | Tuesday 02 October 2018 08:51:12 -0400 (0:00:14.843) 0:22:25.370 ******* >2018-10-02 08:51:12,767 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.03 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: Applied catalog in 3.74 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Total: 216", > " Corrective change: 3", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.02", > " Augeas: 0.02", > " File: 0.14", > " Service: 0.38", > " Pcmk property: 0.39", > " Pcmk resource default: 0.39", > " Package: 0.39", > " Exec: 0.90", > " Last run: 1538484672", > " Config retrieval: 3.57", > " Total: 6.21", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1538484664", > " Puppet: 4.8.2", > "Warning: Undefined variable 
'::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:51:12,791 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.95 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.18 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 134", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Firewall: 0.01", > " Sysctl: 0.01", > " Augeas: 0.01", > " File: 0.09", > " Service: 0.11", > " Exec: 0.21", > " Package: 0.24", > " Last run: 1538484668", > " Config retrieval: 2.28", > " Total: 2.97", > "Version:", > " Config: 1538484664", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:51:12,802 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.75 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.29 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 140", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.10", > " Service: 0.12", > " Exec: 0.25", > " Package: 0.26", > " Last run: 1538484667", > " Config retrieval: 2.04", > " Total: 2.80", > "Version:", > " Config: 1538484664", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:51:12,830 p=1004 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 3] ***************** >2018-10-02 08:51:12,831 p=1004 u=mistral | Tuesday 02 October 2018 08:51:12 -0400 (0:00:00.193) 0:22:25.564 ******* >2018-10-02 08:51:12,862 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:51:12,892 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:51:12,904 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:51:12,930 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 3] *** >2018-10-02 08:51:12,931 p=1004 u=mistral | Tuesday 02 October 2018 08:51:12 -0400 (0:00:00.100) 0:22:25.664 ******* >2018-10-02 08:51:12,962 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:51:12,991 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:51:13,003 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:51:13,027 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:51:13,027 p=1004 u=mistral | Tuesday 02 October 2018 08:51:13 -0400 (0:00:00.096) 0:22:25.760 ******* >2018-10-02 08:51:13,058 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:51:13,084 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:51:13,097 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:51:13,121 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:51:13,121 p=1004 u=mistral | Tuesday 02 October 2018 08:51:13 -0400 (0:00:00.093) 0:22:25.854 ******* >2018-10-02 08:51:13,151 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:51:13,178 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:51:13,196 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:51:13,225 p=1004 u=mistral | TASK [Start containers for step 3] ********************************************* >2018-10-02 08:51:13,226 p=1004 u=mistral | Tuesday 02 October 2018 08:51:13 -0400 (0:00:00.104) 0:22:25.959 ******* >2018-10-02 08:51:13,782 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:51:41,572 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:52:32,225 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:52:32,255 p=1004 u=mistral | TASK [Debug output for task: Start containers for step 3] ********************** >2018-10-02 08:52:32,255 p=1004 u=mistral | Tuesday 02 October 2018 08:52:32 -0400 (0:01:19.029) 0:23:44.988 ******* >2018-10-02 08:52:32,344 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:52:32,367 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-ceilometer-notification ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "5fcda0d83a5e: Already exists", > "37913a3798ef: Pulling fs layer", > "37913a3798ef: Verifying Checksum", > "37913a3798ef: Download complete", > "37913a3798ef: Pull complete", > "Digest: sha256:ad00dd55dbd675454a7944b820c9786695e12cbe7c8fc03322559884018fda89", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-account ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-account", > "119515329f22: Already exists", > "ccb5b1882a13: Pulling fs layer", > "ccb5b1882a13: Download complete", > "ccb5b1882a13: Pull complete", > "Digest: sha256:61461e60a71a7b4b691e1a82c392a6317afa06ea96c37871eaa37d787607bfb0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-object ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-object", > "bc6c70aa7dca: Pulling fs layer", > "bc6c70aa7dca: Verifying Checksum", > "bc6c70aa7dca: Download complete", > "bc6c70aa7dca: Pull complete", > "Digest: sha256:8140be6cc7d227e825b481a22e75230a9a638bb149491972350c409966315d1f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", > "stdout: ", > "stdout: b21c86b9e90f7939f70af17e2db2ee5f1baa1a37a9fec921410bcd4725554801", > "stdout: 2018-10-02 12:51:18.313 11 WARNING oslo_config.cfg [-] Deprecated: Option \"db_backend\" from group \"DEFAULT\" is deprecated. 
Use option \"backend\" from group \"database\".\u001b[00m", > "2018-10-02 12:51:18.395 11 INFO migrate.versioning.api [-] 70 -> 71... \u001b[00m", > "2018-10-02 12:51:18.557 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.557 11 INFO migrate.versioning.api [-] 71 -> 72... \u001b[00m", > "2018-10-02 12:51:18.589 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.589 11 INFO migrate.versioning.api [-] 72 -> 73... \u001b[00m", > "2018-10-02 12:51:18.630 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.630 11 INFO migrate.versioning.api [-] 73 -> 74... \u001b[00m", > "2018-10-02 12:51:18.635 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.636 11 INFO migrate.versioning.api [-] 74 -> 75... \u001b[00m", > "2018-10-02 12:51:18.642 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.642 11 INFO migrate.versioning.api [-] 75 -> 76... \u001b[00m", > "2018-10-02 12:51:18.648 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.648 11 INFO migrate.versioning.api [-] 76 -> 77... \u001b[00m", > "2018-10-02 12:51:18.655 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.655 11 INFO migrate.versioning.api [-] 77 -> 78... \u001b[00m", > "2018-10-02 12:51:18.662 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.662 11 INFO migrate.versioning.api [-] 78 -> 79... \u001b[00m", > "2018-10-02 12:51:18.876 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.876 11 INFO migrate.versioning.api [-] 79 -> 80... \u001b[00m", > "2018-10-02 12:51:18.924 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.924 11 INFO migrate.versioning.api [-] 80 -> 81... \u001b[00m", > "2018-10-02 12:51:18.930 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.930 11 INFO migrate.versioning.api [-] 81 -> 82... 
\u001b[00m", > "2018-10-02 12:51:18.936 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.936 11 INFO migrate.versioning.api [-] 82 -> 83... \u001b[00m", > "2018-10-02 12:51:18.942 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.942 11 INFO migrate.versioning.api [-] 83 -> 84... \u001b[00m", > "2018-10-02 12:51:18.948 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.948 11 INFO migrate.versioning.api [-] 84 -> 85... \u001b[00m", > "2018-10-02 12:51:18.954 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-10-02 12:51:18.955 11 INFO migrate.versioning.api [-] 85 -> 86... \u001b[00m", > "2018-10-02 12:51:19.003 11 INFO migrate.versioning.api [-] done\u001b[00m", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for controller-0.localdomain in environment production in 1.49 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1538484687'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-ex'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > 
"\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.26 seconds\u001b[0m", > "stderr: \u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib 
validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stderr: Deprecated: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\".", > "stdout: Upgraded database to: rocky_expand02, current revision(s): rocky_expand02", > "Database migration is up to date. No migration needed.", > "Upgraded database to: rocky_contract02, current revision(s): rocky_contract02", > "Database is synced successfully.", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/glance/glance-api.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-api.conf to /etc/glance/glance-api.conf", > "INFO:__main__:Deleting /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-cache.conf to /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/lib/glance", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ [[ ! -d /var/log/kolla/glance ]]", > "++ mkdir -p /var/log/kolla/glance", > "+++ stat -c %a /var/log/kolla/glance", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/glance", > "++ . /usr/local/bin/kolla_glance_extend_start", > "+++ [[ -n 0 ]]", > "+++ glance-manage db_sync", > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1352: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade", > " expire_on_commit=expire_on_commit, _conf=conf)", > "INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Will assume non-transactional DDL.", > "INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial", > "INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table", > "INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images", > "INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with 
pike_contract01", > "INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01", > "INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table", > "INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images", > "INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables", > "INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01", > "INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01", > "INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02", > "+++ glance-manage db_load_metadefs", > "+++ exit 0", > "stdout: '/swift_ringbuilder/etc/swift/account.ring.gz' -> '/etc/swift/account.ring.gz'", > "'/swift_ringbuilder/etc/swift/container.ring.gz' -> '/etc/swift/container.ring.gz'", > "'/swift_ringbuilder/etc/swift/object.ring.gz' -> '/etc/swift/object.ring.gz'", > "'/swift_ringbuilder/etc/swift/account.builder' -> '/etc/swift/account.builder'", > "'/swift_ringbuilder/etc/swift/container.builder' -> '/etc/swift/container.builder'", > "'/swift_ringbuilder/etc/swift/object.builder' -> '/etc/swift/object.builder'", > "'/swift_ringbuilder/etc/swift/backups' -> '/etc/swift/backups'", > "'/swift_ringbuilder/etc/swift/backups/1538483739.object.builder' -> '/etc/swift/backups/1538483739.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483740.account.builder' -> '/etc/swift/backups/1538483740.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483740.container.builder' -> '/etc/swift/backups/1538483740.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483742.account.builder' -> 
'/etc/swift/backups/1538483742.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483742.account.ring.gz' -> '/etc/swift/backups/1538483742.account.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1538483742.object.builder' -> '/etc/swift/backups/1538483742.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483742.object.ring.gz' -> '/etc/swift/backups/1538483742.object.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1538483743.container.builder' -> '/etc/swift/backups/1538483743.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1538483743.container.ring.gz' -> '/etc/swift/backups/1538483743.container.ring.gz'", > "stderr: INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Running upgrade -> 001, Icehouse release", > "INFO [alembic.runtime.migration] Running upgrade 001 -> 002, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 002 -> 003, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 003 -> 004, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 004 -> 005, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 005 -> 006, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 006 -> 007, convert clusters.status_description to LongText", > "INFO [alembic.runtime.migration] Running upgrade 007 -> 008, add security_groups field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 008 -> 009, add rollback info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 009 -> 010, add auto_security_groups flag to node group", > "INFO [alembic.runtime.migration] Running upgrade 010 -> 011, add Sahara settings info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 011 -> 012, add availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 012 -> 013, add volumes_availability_zone field to node groups", > "INFO [alembic.runtime.migration] 
Running upgrade 013 -> 014, add_volume_type", > "INFO [alembic.runtime.migration] Running upgrade 014 -> 015, add_events_objects", > "INFO [alembic.runtime.migration] Running upgrade 015 -> 016, Add is_proxy_gateway", > "INFO [alembic.runtime.migration] Running upgrade 016 -> 017, drop progress in JobExecution", > "INFO [alembic.runtime.migration] Running upgrade 017 -> 018, add volume_local_to_instance flag", > "INFO [alembic.runtime.migration] Running upgrade 018 -> 019, Add is_default field for cluster and node_group templates", > "INFO [alembic.runtime.migration] Running upgrade 019 -> 020, remove redandunt progress ops", > "INFO [alembic.runtime.migration] Running upgrade 020 -> 021, Add data_source_urls to job_executions to support placeholders", > "INFO [alembic.runtime.migration] Running upgrade 021 -> 022, add_job_interface", > "INFO [alembic.runtime.migration] Running upgrade 022 -> 023, add_use_autoconfig", > "INFO [alembic.runtime.migration] Running upgrade 023 -> 024, manila_shares", > "INFO [alembic.runtime.migration] Running upgrade 024 -> 025, Increase internal_ip and management_ip column size to work with IPv6", > "INFO [alembic.runtime.migration] Running upgrade 025 -> 026, add is_public and is_protected flags", > "INFO [alembic.runtime.migration] Running upgrade 026 -> 027, Rename oozie_job_id", > "INFO [alembic.runtime.migration] Running upgrade 027 -> 028, add_storage_devices_number", > "INFO [alembic.runtime.migration] Running upgrade 028 -> 029, set is_protected on is_default", > "INFO [alembic.runtime.migration] Running upgrade 029 -> 030, health-check", > "INFO [alembic.runtime.migration] Running upgrade 030 -> 031, added_plugins_table", > "INFO [alembic.runtime.migration] Running upgrade 031 -> 032, 032_add_domain_name", > "INFO [alembic.runtime.migration] Running upgrade 032 -> 033, 033_add anti_affinity_ratio field to cluster", > "INFO [alembic.runtime.migration] Running upgrade 033 -> 034, Add boot_from_volumes field for node_groups and 
related classes", > "stdout: 4121fe414f6d8c58d33d8b0c3516b076a94903a772e89b1866b330fc62655827", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_admin.conf to /etc/httpd/conf.d/10-keystone_wsgi_admin.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_main.conf to /etc/httpd/conf.d/10-keystone_wsgi_main.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", 
> "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > 
"INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to /etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to /etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Creating directory /etc/keystone/credential-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/0 to /etc/keystone/credential-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/1 to /etc/keystone/credential-keys/1", > "INFO:__main__:Creating directory /etc/keystone/fernet-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/0 to /etc/keystone/fernet-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/1 to /etc/keystone/fernet-keys/1", > "INFO:__main__:Deleting /etc/keystone/keystone.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/keystone.conf to 
/etc/keystone/keystone.conf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/spool/cron/keystone to /var/spool/cron/keystone", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-admin to /var/www/cgi-bin/keystone/keystone-admin", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-public to /var/www/cgi-bin/keystone/keystone-public", > "+ CMD='/usr/sbin/httpd -DFOREGROUND'", > "++ [[ rhel =~ debian|ubuntu ]]", > "++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", > "++ [[ ! -d /var/log/kolla/keystone ]]", > "++ mkdir -p /var/log/kolla/keystone", > "+++ stat -c %U:%G /var/log/kolla/keystone", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", > "++ chown keystone:kolla /var/log/kolla/keystone", > "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", > "++ touch /var/log/kolla/keystone/keystone.log", > "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", > "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", > "+++ stat -c %a /var/log/kolla/keystone", > "++ chmod 755 /var/log/kolla/keystone", > "++ EXTRA_KEYSTONE_MANAGE_ARGS=", > "++ [[ -n '' ]]", > "++ [[ -n 0 ]]", > "++ sudo -H -u keystone keystone-manage db_sync", > "++ exit 0", > "stdout: 0d5ebc2c97b00371d23e65d4fe9d04056a1732eaede7665c7bcd0f657d61997e", > "stdout: Running upgrade for neutron ...", > "OK", > "Running upgrade for networking-bgpvpn ...", > "Running upgrade for networking-l2gw ...", > "Running upgrade for networking-odl ...", > "Running upgrade for neutron-fwaas ...", > "Running upgrade for neutron-lbaas ...", > "Running upgrade for vmware-nsx ...", > "INFO [alembic.runtime.migration] Running upgrade -> kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225", > "INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151", > "INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf", > "INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee", > "INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f", > "INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773", > "INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592", > "INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7", > "INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79", > "INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051", > "INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136", > "INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59", > "INFO 
[alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d", > "INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a", > "INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25", > "INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee", > "INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9", > "INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4", > "INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664", > "INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5", > "INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f", > "INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821", > "INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4", > "INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81", > "INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6", > "INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532", > "INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f", > "INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a", > "INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b", > "INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99", > "INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016", > "INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3", > "INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d", > "INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d", > "INFO 
[alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297", > "INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c", > "INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39", > "INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b", > "INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050", > "INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9", > "INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53", > "INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70", > "INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502", > "INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee", > "INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048", > "INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4", > "INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37", > "INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa", > "INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf", > "INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4", > "INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e", > "INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90", > "INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4", > "INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426", > "INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524", > "INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc", > "INFO 
[alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d", > "INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70", > "INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c", > "INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c", > "INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da", > "INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192", > "INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9", > "INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6", > "INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f", > "INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee", > "INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c", > "INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding", > "INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a", > "INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad", > "INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab", > "INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0", > "INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62", > "INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353", > "INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586", > "INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d", > "INFO [alembic.runtime.migration] Running upgrade -> start_networking_bgpvpn, start networking_bgpvpn chain", > "Revision ID: start_networking_bgpvpn", > "Revises: None", > "Create Date: 2015-10-01 18:04:17.265514", > "INFO [alembic.runtime.migration] Running upgrade start_networking_bgpvpn -> 17d9fd4fddee, expand initial", > 
"Revision ID: 17d9fd4fddee", > "Revises: start_networking_bgpvpn", > "Create Date: 2015-10-01 17:35:11.000000", > "INFO [alembic.runtime.migration] Running upgrade 17d9fd4fddee -> 3600132c6147, Add router association table", > "INFO [alembic.runtime.migration] Running upgrade 3600132c6147 -> 0ab4049986b8, add indexes to tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 0ab4049986b8 -> 9a6664f3b8d4, Add tables for port associations", > "INFO [alembic.runtime.migration] Running upgrade 9a6664f3b8d4 -> 39411aacf9b8, add vni to bgpvpn table", > "INFO [alembic.runtime.migration] Running upgrade 39411aacf9b8 -> 4610803bdf0d, Add 'extra-routes' to router association table", > "INFO [alembic.runtime.migration] Running upgrade 4610803bdf0d -> 666c706fea3b, Add local_pref to bgpvpns table", > "INFO [alembic.runtime.migration] Running upgrade 666c706fea3b -> 7a9482036ecd, Add standard attributes", > "INFO [alembic.runtime.migration] Running upgrade start_networking_bgpvpn -> 180baa4183e0, contract initial", > "Revision ID: 180baa4183e0", > "INFO [alembic.runtime.migration] Running upgrade 180baa4183e0 -> 23ce05e0a19f, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 23ce05e0a19f -> 9d7f1ae5fa56, Add standard FK and constraints, and defs for existing objects", > "INFO [alembic.runtime.migration] Running upgrade -> start_networking_l2gw, start networking-l2gw chain", > "INFO [alembic.runtime.migration] Running upgrade start_networking_l2gw -> 54c9c8fe22bf, DB_Models_for_OVSDB_Hardware_VTEP_Schema", > "INFO [alembic.runtime.migration] Running upgrade 54c9c8fe22bf -> 42438454c556, l2gateway_models", > "INFO [alembic.runtime.migration] Running upgrade 42438454c556 -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 60019185aa99, Initial no-op Liberty expand rule.", > "INFO [alembic.runtime.migration] Running upgrade 60019185aa99 -> 49ce408ac349, add indexes to tenant_id", > "INFO [alembic.runtime.migration] 
Running upgrade kilo -> 79919185aa99, Initial no-op Liberty contract rule.", > "INFO [alembic.runtime.migration] Running upgrade 79919185aa99 -> 2f533f7705dd, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade -> b89a299e19f9, Initial odl db, branchpoint", > "INFO [alembic.runtime.migration] Running upgrade b89a299e19f9 -> 247501328046, Start of odl expand branch", > "INFO [alembic.runtime.migration] Running upgrade 247501328046 -> 37e242787ae5, OpenDaylight Neutron mechanism driver refactor", > "INFO [alembic.runtime.migration] Running upgrade 37e242787ae5 -> 703dbf02afde, Add journal maintenance table", > "INFO [alembic.runtime.migration] Running upgrade 703dbf02afde -> 3d560427d776, add sequence number to journal", > "INFO [alembic.runtime.migration] Running upgrade b89a299e19f9 -> 383acb0d38a0, Start of odl contract branch", > "INFO [alembic.runtime.migration] Running upgrade 383acb0d38a0 -> fa0c536252a5, update opendayligut journal", > "INFO [alembic.runtime.migration] Running upgrade fa0c536252a5 -> eccd865b7d3a, drop opendaylight_maintenance table", > "INFO [alembic.runtime.migration] Running upgrade eccd865b7d3a -> 7cbef5a56298, Drop created_at column", > "INFO [alembic.runtime.migration] Running upgrade 3d560427d776 -> 43af357fd638, Added version_id for optimistic locking", > "INFO [alembic.runtime.migration] Running upgrade 43af357fd638 -> 0472f56ff2fb, Add journal dependencies table", > "INFO [alembic.runtime.migration] Running upgrade 0472f56ff2fb -> 6f7dfb241354, create opendaylight_preiodic_task table", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_fwaas, start neutron-fwaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_fwaas -> 4202e3047e47, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4202e3047e47 -> 540142f314f4, FWaaS router insertion", > "INFO [alembic.runtime.migration] Running upgrade 540142f314f4 -> 796c68dffbb, cisco_csr_fwaas", > "INFO 
[alembic.runtime.migration] Running upgrade 796c68dffbb -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> c40fbb377ad, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade c40fbb377ad -> 4b47ea298795, add reject rule", > "INFO [alembic.runtime.migration] Running upgrade 4b47ea298795 -> d6a12e637e28, neutron-fwaas v2.0", > "INFO [alembic.runtime.migration] Running upgrade d6a12e637e28 -> 876782258a43, create_default_firewall_groups_table", > "INFO [alembic.runtime.migration] Running upgrade 876782258a43 -> f24e0d5e5bff, uniq_firewallgroupportassociation0port", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 67c8e8d61d5, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade 67c8e8d61d5 -> 458aa42b14b, fw_table_alter script to make <name> column case sensitive", > "INFO [alembic.runtime.migration] Running upgrade 458aa42b14b -> f83a0b2964d0, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, change shared attribute for firewall resource", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_lbaas, start neutron-lbaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_lbaas -> lbaasv2, lbaas version 2 api", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2 -> 4deef6d81931, add provisioning and operating statuses", > "INFO [alembic.runtime.migration] Running upgrade 4deef6d81931 -> 4b6d8d5310b8, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4b6d8d5310b8 -> 364f9b6064f0, agentv2", > "INFO [alembic.runtime.migration] Running upgrade 364f9b6064f0 -> lbaasv2_tls, lbaasv2 TLS", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2_tls -> 4ba00375f715, edge_driver", > "INFO [alembic.runtime.migration] Running upgrade 4ba00375f715 -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 3345facd0452, Initial Liberty no-op 
expand script.", > "INFO [alembic.runtime.migration] Running upgrade 3345facd0452 -> 4a408dd491c2, Addition of Name column to lbaas_members and lbaas_healthmonitors table", > "INFO [alembic.runtime.migration] Running upgrade 4a408dd491c2 -> 3426acbc12de, Add flavor id", > "INFO [alembic.runtime.migration] Running upgrade 3426acbc12de -> 6aee0434f911, independent pools", > "INFO [alembic.runtime.migration] Running upgrade 6aee0434f911 -> 3543deab1547, add_l7_tables", > "INFO [alembic.runtime.migration] Running upgrade 3543deab1547 -> 62deca5010cd, Add tenant-id index for L7 tables", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 130ebfdef43, Initial Liberty no-op contract revision.", > "INFO [alembic.runtime.migration] Running upgrade 130ebfdef43 -> 4b4dc6d5d843, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 4b4dc6d5d843 -> e6417a8b114d, Drop v1 tables", > "INFO [alembic.runtime.migration] Running upgrade 62deca5010cd -> 844352f9fe6f, Add healthmonitor max retries down", > "INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 53a3254aa95e, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 53a3254aa95e -> 28430956782d, nsxv3_security_groups", > "INFO [alembic.runtime.migration] Running upgrade 28430956782d -> 279b70ac3ae8, NSXv3 Add l2gwconnection table", > "INFO [alembic.runtime.migration] Running upgrade 279b70ac3ae8 -> 312211a5725f, nsxv_lbv2", > "INFO [alembic.runtime.migration] Running upgrade 312211a5725f -> 2af850eb3970, update nsxv tz binding type", > "INFO [alembic.runtime.migration] Running upgrade 2af850eb3970 -> 69fb78b33d41, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 69fb78b33d41 -> 20483029f1ff, update nsx_v3 tz_network_bindings_binding_type", > "INFO [alembic.runtime.migration] Running upgrade 20483029f1ff -> 4c45bcadccf9, extend_secgroup_rule", > 
"INFO [alembic.runtime.migration] Running upgrade 4c45bcadccf9 -> 2c87aedb206f, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 2c87aedb206f -> 3e4dccfe6fb4, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 3e4dccfe6fb4 -> 967462f585e1, add dvs_id column to neutron_nsx_network_mappings", > "INFO [alembic.runtime.migration] Running upgrade 967462f585e1 -> b7f41687cbad, nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade b7f41687cbad -> c288bb6a7252, NSXv add resource pool to the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade c288bb6a7252 -> c644ec62c585, NSXv3 add nsx_service_bindings and nsx_dhcp_bindings tables", > "INFO [alembic.runtime.migration] Running upgrade c644ec62c585 -> 5e564e781d77, add nsx binding type", > "INFO [alembic.runtime.migration] Running upgrade 5e564e781d77 -> aede17d51d0f, add timestamp", > "INFO [alembic.runtime.migration] Running upgrade aede17d51d0f -> 7e46906f8997, lbaas foreignkeys", > "INFO [alembic.runtime.migration] Running upgrade 7e46906f8997 -> 86a55205337c, NSXv add availability zone to the router bindings table instead of", > "the resource pool column", > "INFO [alembic.runtime.migration] Running upgrade 86a55205337c -> 633514d94b93, Add support for TaaS", > "INFO [alembic.runtime.migration] Running upgrade 633514d94b93 -> 1b4eaffe4f31, NSX Adds a 'provider' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade 1b4eaffe4f31 -> 6e6da8296c0e, Add support for IPAM in NSXv", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 393bf843b96, Initial Liberty no-op contract script.", > "INFO [alembic.runtime.migration] Running upgrade 393bf843b96 -> 3c88bdea3054, nsxv_vdr_dhcp_binding.py", > "INFO [alembic.runtime.migration] Running upgrade 3c88bdea3054 -> 5ed1ffbc0d2a, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 5ed1ffbc0d2a -> 
081af0e396d7, nsxv3_secgroup_local_ip_prefix", > "INFO [alembic.runtime.migration] Running upgrade 081af0e396d7 -> dbe29d208ac6, NSXv add DHCP MTU to subnets", > "INFO [alembic.runtime.migration] Running upgrade dbe29d208ac6 -> d49ac91b560e, Support shared pools with NSXv LBaaSv2 driver", > "INFO [alembic.runtime.migration] Running upgrade d49ac91b560e -> 5c8f451290b7, nsxv_subnet_ipam rename to nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 5c8f451290b7 -> 14a89ddf96e2, NSX Adds a 'availability_zone' attribute to internal-networks table", > "INFO [alembic.runtime.migration] Running upgrade 14a89ddf96e2 -> 8c0a81a07691, Update the primary key constraint of nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 8c0a81a07691 -> 84ceffa27115, remove the foreign key constrain from nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade 84ceffa27115 -> a1be06050b41, update nsx binding types", > "INFO [alembic.runtime.migration] Running upgrade a1be06050b41 -> 717f7f63a219, nsxv3_lbaas_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 6e6da8296c0e -> 7b5ec3caa9a4, Fix the availability zones default value in the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade 7b5ec3caa9a4 -> e816d4fe9d4f, NSX Adds a 'policy' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade e816d4fe9d4f -> dd9fe5a3a526, NSX Adds certificate table for client certificate management", > "INFO [alembic.runtime.migration] Running upgrade dd9fe5a3a526 -> 01a33f93f5fd, nsxv_lbv2_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 01a33f93f5fd -> e4c503f4133f, Port vnic_type support", > "INFO [alembic.runtime.migration] Running upgrade e4c503f4133f -> 7c4704ad37df, Fix NSX Lbaas L7 policy table creation", > "INFO [alembic.runtime.migration] Running upgrade 7c4704ad37df -> 8699700cd95c, nsxv_bgp_speaker_mapping", > "INFO [alembic.runtime.migration] Running upgrade 
8699700cd95c -> 53eb497903a4, Drop VDR DHCP bindings table", > "INFO [alembic.runtime.migration] Running upgrade 53eb497903a4 -> ea7a72ab9643", > "INFO [alembic.runtime.migration] Running upgrade ea7a72ab9643 -> 9799427fc0e1, nsx map project to plugin", > "INFO [alembic.runtime.migration] Running upgrade 9799427fc0e1 -> 0dbeda408e41, nsxv3_vpn_mapping", > "stdout: 8e5f4d60cdc84a03efd349c80b90f9137ee5c3c797365e413ac517e84c67b474", > "stdout: d12de2e2e66fb97f3ed9cc929e8cb01ce80cfb80f2bf622320bff423c447b03e", > "stdout: 703f727e69f2503cddfa9df1a6fadc41d91a2cd6a0183067ad95335e8b343f01", > "stdout: (cellv2) Creating default cell_v2 cell", > "stdout: a07c25e1c96cb60c58ab904f5f4434352d0215307961a531162be1266ef3d951", > "stdout: 9f2f685cc090ef7583e05dbd8bf62176b06c4c5f2bdd8e591d477108096afd45", > "stderr: /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')", > " result = self._query(query)", > "/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')", > "stdout: b74bec4962e8f61d3dd4e92ae9041a0122a9ccc46945ed7978236a24e51a8658" > ] >} >2018-10-02 08:52:32,406 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-libvirt ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-libvirt", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "1aee11278cfc: Pulling fs layer", > "1aee11278cfc: Download complete", > "1aee11278cfc: Pull complete", > "Digest: sha256:59dc3e2a67038c6ed26badce6efa9c8e883b9901a9610478728edfe0cc26cc8d", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", > "", > "stderr: ", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for compute-0.localdomain in environment production in 1.58 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1538484698'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/ensure: created\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on 
Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.25 seconds\u001b[0m", > "stderr: \u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stdout: INFO:nova_statedir:Applying nova statedir ownership", > "INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436", > "INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/", > "INFO:nova_statedir:Changing ownership of /var/lib/nova from 0:0 to 42436:42436", > "INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/", > "INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 0:0 to 42436:42436", > "INFO:nova_statedir:Nova statedir ownership complete", > "stdout: f58ca776771e042c73ee88a874e9716ab21c3bbbb79b3b65e209af6e2bddf469", > "stdout: 4a45ee58fe64c82b7f2826d6b2c0c649cd3b6bf0cd6d9c29bf28fcdfc9704e10", > "stdout: 4fd54a0da9412c569d09d6f01f2fe51bca6d05895ca9e4638252491c866a4812" > ] >} >2018-10-02 08:52:32,434 p=1004 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks3.json exists] ******** >2018-10-02 08:52:32,435 p=1004 u=mistral | Tuesday 02 October 2018 08:52:32 -0400 (0:00:00.179) 0:23:45.168 ******* >2018-10-02 08:52:32,688 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538483621.6782339, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "f95a667e13f830f3654131f0f75b234e7583eada", "ctime": 1538483621.6822338, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 58720550, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0600", "mtime": 1538483621.5152338, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 444, "uid": 
0, "version": "168039978", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 08:52:32,697 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:52:32,727 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:52:32,756 p=1004 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 3] ******************** >2018-10-02 08:52:32,756 p=1004 u=mistral | Tuesday 02 October 2018 08:52:32 -0400 (0:00:00.321) 0:23:45.489 ******* >2018-10-02 08:52:32,821 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:52:32,842 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:19,261 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:19,288 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 3] *** >2018-10-02 08:55:19,288 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:02:46.531) 0:26:32.021 ******* >2018-10-02 08:55:19,350 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:55:19,352 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 12:52:33,014 INFO: 91923 -- Running docker-puppet", > "2018-10-02 12:52:33,014 INFO: 91923 -- Service compilation completed.", > "2018-10-02 12:52:33,015 INFO: 91923 -- Starting multiprocess configuration steps. 
Using 8 processes.", > "2018-10-02 12:52:33,030 INFO: 91930 -- Starting configuration of keystone_init_tasks using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:52:33,032 INFO: 91930 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-10-02 12:52:33,085 INFO: 91930 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 12:55:19,142 INFO: 91930 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-10-02 12:55:19,199 INFO: 91930 -- Finished processing puppet configs for keystone_init_tasks" > ] >} >2018-10-02 08:55:19,364 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:55:19,371 p=1004 u=mistral | PLAY [External deployment step 4] ********************************************** >2018-10-02 08:55:19,389 p=1004 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 08:55:19,390 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.101) 0:26:32.123 ******* >2018-10-02 08:55:19,412 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,428 p=1004 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 08:55:19,429 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.038) 0:26:32.162 ******* >2018-10-02 08:55:19,464 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,469 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,476 p=1004 u=mistral | skipping: 
[undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,494 p=1004 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 08:55:19,494 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.065) 0:26:32.227 ******* >2018-10-02 08:55:19,514 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,527 p=1004 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 08:55:19,527 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.033) 0:26:32.261 ******* >2018-10-02 08:55:19,552 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,564 p=1004 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 08:55:19,565 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.037) 0:26:32.298 ******* >2018-10-02 08:55:19,589 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,604 p=1004 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 08:55:19,604 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.039) 0:26:32.338 ******* >2018-10-02 08:55:19,626 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,640 p=1004 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 08:55:19,640 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.036) 0:26:32.374 ******* >2018-10-02 08:55:19,662 p=1004 u=mistral 
| skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,676 p=1004 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 08:55:19,676 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.035) 0:26:32.409 ******* >2018-10-02 08:55:19,698 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,713 p=1004 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 08:55:19,714 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.037) 0:26:32.447 ******* >2018-10-02 08:55:19,735 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,750 p=1004 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 08:55:19,750 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.036) 0:26:32.483 ******* >2018-10-02 08:55:19,773 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,786 p=1004 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 08:55:19,786 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.036) 0:26:32.520 ******* >2018-10-02 08:55:19,806 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,818 p=1004 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 08:55:19,818 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.031) 0:26:32.552 ******* >2018-10-02 08:55:19,838 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
08:55:19,853 p=1004 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 08:55:19,853 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.034) 0:26:32.586 ******* >2018-10-02 08:55:19,873 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,887 p=1004 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 08:55:19,887 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.034) 0:26:32.621 ******* >2018-10-02 08:55:19,918 p=1004 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,933 p=1004 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 08:55:19,933 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.045) 0:26:32.666 ******* >2018-10-02 08:55:19,961 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:19,973 p=1004 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 08:55:19,973 p=1004 u=mistral | Tuesday 02 October 2018 08:55:19 -0400 (0:00:00.039) 0:26:32.706 ******* >2018-10-02 08:55:19,994 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,009 p=1004 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 08:55:20,009 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.036) 0:26:32.743 ******* >2018-10-02 08:55:20,028 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 08:55:20,043 p=1004 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-10-02 08:55:20,043 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.033) 0:26:32.776 ******* >2018-10-02 08:55:20,064 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,084 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:55:20,084 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.041) 0:26:32.817 ******* >2018-10-02 08:55:20,106 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,122 p=1004 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 08:55:20,123 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.038) 0:26:32.856 ******* >2018-10-02 08:55:20,145 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,158 p=1004 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 08:55:20,159 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.035) 0:26:32.892 ******* >2018-10-02 08:55:20,181 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,195 p=1004 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 08:55:20,196 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.036) 0:26:32.929 ******* >2018-10-02 08:55:20,221 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,235 p=1004 u=mistral | TASK [Write role data file] 
**************************************************** >2018-10-02 08:55:20,235 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.039) 0:26:32.968 ******* >2018-10-02 08:55:20,263 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,276 p=1004 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 08:55:20,277 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.041) 0:26:33.010 ******* >2018-10-02 08:55:20,298 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,311 p=1004 u=mistral | TASK [Delete param file] ******************************************************* >2018-10-02 08:55:20,311 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.034) 0:26:33.044 ******* >2018-10-02 08:55:20,339 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,353 p=1004 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 08:55:20,353 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.042) 0:26:33.087 ******* >2018-10-02 08:55:20,375 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,390 p=1004 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 08:55:20,390 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.036) 0:26:33.123 ******* >2018-10-02 08:55:20,412 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,426 p=1004 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 08:55:20,427 p=1004 u=mistral | Tuesday 02 
October 2018 08:55:20 -0400 (0:00:00.036) 0:26:33.160 ******* >2018-10-02 08:55:20,448 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,463 p=1004 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 08:55:20,463 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.036) 0:26:33.196 ******* >2018-10-02 08:55:20,484 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,497 p=1004 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 08:55:20,497 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.034) 0:26:33.231 ******* >2018-10-02 08:55:20,519 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,525 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for 4] *************************************** >2018-10-02 08:55:20,533 p=1004 u=mistral | PLAY [Overcloud common deploy step tasks 4] ************************************ >2018-10-02 08:55:20,564 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 08:55:20,565 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.067) 0:26:33.298 ******* >2018-10-02 08:55:20,596 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,623 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,640 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,667 p=1004 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 
08:55:20,668 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.102) 0:26:33.401 ******* >2018-10-02 08:55:20,699 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,726 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,740 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,764 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 08:55:20,764 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.096) 0:26:33.498 ******* >2018-10-02 08:55:20,796 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,822 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,834 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,856 p=1004 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 08:55:20,857 p=1004 u=mistral | Tuesday 02 October 2018 08:55:20 -0400 (0:00:00.092) 0:26:33.590 ******* >2018-10-02 08:55:20,885 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,909 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,924 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:20,948 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:55:20,948 p=1004 u=mistral | Tuesday 02 October 2018 
08:55:20 -0400 (0:00:00.091) 0:26:33.681 ******* >2018-10-02 08:55:20,984 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,057 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,071 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,099 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:55:21,099 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.150) 0:26:33.832 ******* >2018-10-02 08:55:21,133 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:55:21,163 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:55:21,176 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:55:21,202 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 08:55:21,202 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.103) 0:26:33.935 ******* >2018-10-02 08:55:21,234 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,262 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,276 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,302 p=1004 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 08:55:21,303 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.100) 0:26:34.036 ******* >2018-10-02 08:55:21,335 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,364 p=1004 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,385 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,415 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 08:55:21,416 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.112) 0:26:34.149 ******* >2018-10-02 08:55:21,450 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,480 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,493 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,521 p=1004 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 08:55:21,521 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.105) 0:26:34.255 ******* >2018-10-02 08:55:21,555 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,584 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,597 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,625 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:55:21,625 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.103) 0:26:34.358 ******* >2018-10-02 08:55:21,658 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,686 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 08:55:21,702 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,730 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:55:21,730 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.104) 0:26:34.463 ******* >2018-10-02 08:55:21,765 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:55:21,792 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:55:21,805 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:55:21,829 p=1004 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 08:55:21,829 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.099) 0:26:34.562 ******* >2018-10-02 08:55:21,859 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,884 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,896 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,919 p=1004 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 08:55:21,919 p=1004 u=mistral | Tuesday 02 October 2018 08:55:21 -0400 (0:00:00.090) 0:26:34.653 ******* >2018-10-02 08:55:21,949 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,973 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:21,984 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,006 p=1004 u=mistral | TASK [Write docker config scripts] 
********************************************* >2018-10-02 08:55:22,006 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.086) 0:26:34.739 ******* >2018-10-02 08:55:22,066 p=1004 u=mistral | skipping: [controller-0] => (item=create_swift_secret.sh) => {"changed": false, "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,068 p=1004 u=mistral | skipping: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": false, "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" 
> /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,070 p=1004 u=mistral | skipping: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,071 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_discover_hosts.sh) => {"changed": false, "item": ["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport 
OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,074 p=1004 u=mistral | skipping: [controller-0] => 
(item=nova_api_ensure_default_cell.sh) => {"changed": false, "item": ["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,081 p=1004 u=mistral | skipping: [controller-0] => (item=set_swift_keymaster_key_id.sh) => {"changed": false, "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n 
fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,106 p=1004 u=mistral | skipping: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,113 p=1004 u=mistral | skipping: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": false, "item": ["nova_statedir_ownership.py", {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. 
Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n 
pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,138 p=1004 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-10-02 08:55:22,138 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.132) 0:26:34.871 ******* >2018-10-02 08:55:22,170 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,171 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,198 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,199 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": 
"the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,200 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,201 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,202 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,202 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,203 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,209 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,213 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,220 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,223 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-10-02 08:55:22,226 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,226 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,228 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,238 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,240 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,247 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,252 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,255 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,280 p=1004 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-10-02 08:55:22,280 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.142) 0:26:35.013 ******* >2018-10-02 08:55:22,313 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been 
hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,340 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,353 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:22,378 p=1004 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-10-02 08:55:22,378 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.097) 0:26:35.111 ******* >2018-10-02 08:55:22,410 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,440 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,451 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,477 p=1004 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-10-02 08:55:22,477 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.098) 0:26:35.210 ******* >2018-10-02 08:55:22,540 p=1004 u=mistral | skipping: [ceph-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,542 p=1004 u=mistral | skipping: [ceph-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,544 p=1004 u=mistral | skipping: [ceph-0] => (item=step_3) => {"changed": false, "item": ["step_3", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,552 p=1004 u=mistral | skipping: [ceph-0] => (item=step_4) => {"changed": false, "item": 
["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,557 p=1004 u=mistral | skipping: [ceph-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,567 p=1004 u=mistral | skipping: [controller-0] => (item=step_1) => {"changed": false, "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' 
'192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,568 p=1004 u=mistral | skipping: [ceph-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,574 p=1004 u=mistral | skipping: [controller-0] => (item=step_2) => {"changed": false, "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", 
"privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", 
"privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,580 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", 
"ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", 
"net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c 
'/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t 
/etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,582 p=1004 u=mistral | skipping: [compute-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,583 p=1004 u=mistral | skipping: [compute-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,584 p=1004 u=mistral | skipping: [compute-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,592 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, 
"restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": 
{"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", 
"/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", 
"/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], 
"healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,598 p=1004 u=mistral | 
skipping: [controller-0] => (item=step_5) => {"changed": false, "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include 
::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", 
"/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,600 p=1004 u=mistral | skipping: [compute-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", 
"/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,602 p=1004 u=mistral | skipping: [compute-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,603 p=1004 u=mistral | skipping: [controller-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,613 p=1004 u=mistral | skipping: [compute-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,638 p=1004 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 08:55:22,638 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.160) 0:26:35.371 ******* >2018-10-02 08:55:22,669 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,695 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,707 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,732 p=1004 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 08:55:22,732 p=1004 u=mistral | Tuesday 02 October 2018 08:55:22 -0400 (0:00:00.094) 0:26:35.465 ******* >2018-10-02 08:55:22,794 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,840 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,846 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,851 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,857 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": 
"Conditional result was False"} >2018-10-02 08:55:22,871 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,874 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,876 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], 
"permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,882 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,931 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,937 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,941 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,948 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,954 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,964 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,969 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => {"changed": false, "item": 
["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,976 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,982 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,988 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": 
"/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:22,994 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,000 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,005 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", 
"merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,012 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,017 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,023 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,030 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,035 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,041 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => 
{"changed": false, "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,046 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,052 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,059 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": 
"/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,064 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,073 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,077 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,084 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,090 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,095 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,102 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,108 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,114 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", 
"path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,119 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,125 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": 
"neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,132 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,138 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,144 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,149 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": 
"/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,154 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,161 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,166 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,172 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": 
[{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,177 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,183 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,190 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,196 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": false, 
"item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,203 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,209 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,215 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,221 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,226 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,232 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,238 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,245 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,249 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], 
"skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,254 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,261 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,267 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,273 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,278 p=1004 
u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,284 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,289 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,295 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,301 p=1004 u=mistral | skipping: 
[controller-0] => (item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,307 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,312 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,348 p=1004 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 08:55:23,348 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.615) 0:26:36.081 ******* >2018-10-02 08:55:23,361 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:55:23,388 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:55:23,414 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:55:23,442 p=1004 u=mistral | 
TASK [Write docker-puppet-tasks json files] ************************************ >2018-10-02 08:55:23,442 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.094) 0:26:36.176 ******* >2018-10-02 08:55:23,507 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,508 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,548 p=1004 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 08:55:23,548 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.105) 0:26:36.281 ******* >2018-10-02 08:55:23,580 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,607 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,623 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,647 p=1004 u=mistral | TASK [Check for /etc/puppet/check-mode 
directory for check mode] *************** >2018-10-02 08:55:23,647 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.099) 0:26:36.381 ******* >2018-10-02 08:55:23,678 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,703 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,714 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,738 p=1004 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 08:55:23,738 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.090) 0:26:36.472 ******* >2018-10-02 08:55:23,768 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,794 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,804 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:23,832 p=1004 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 08:55:23,833 p=1004 u=mistral | Tuesday 02 October 2018 08:55:23 -0400 (0:00:00.094) 0:26:36.566 ******* >2018-10-02 08:55:24,451 p=1004 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484923.88-33291073458320/source", "state": "file", "uid": 0} >2018-10-02 08:55:24,488 p=1004 u=mistral | changed: [ceph-0] => 
{"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484923.92-275690622681389/source", "state": "file", "uid": 0} >2018-10-02 08:55:24,503 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538484923.95-31693646098453/source", "state": "file", "uid": 0} >2018-10-02 08:55:24,529 p=1004 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 08:55:24,529 p=1004 u=mistral | Tuesday 02 October 2018 08:55:24 -0400 (0:00:00.696) 0:26:37.263 ******* >2018-10-02 08:55:24,560 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:24,587 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:24,596 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:24,620 p=1004 u=mistral | TASK [Run puppet host configuration for step 4] ******************************** >2018-10-02 08:55:24,620 p=1004 u=mistral | Tuesday 02 October 2018 08:55:24 -0400 (0:00:00.091) 0:26:37.354 ******* >2018-10-02 08:55:41,189 p=1004 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:55:41,922 
p=1004 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:55:45,720 p=1004 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:55:45,746 p=1004 u=mistral | TASK [Debug output for task: Run puppet host configuration for step 4] ********* >2018-10-02 08:55:45,747 p=1004 u=mistral | Tuesday 02 October 2018 08:55:45 -0400 (0:00:21.126) 0:26:58.480 ******* >2018-10-02 08:55:45,812 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.43 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}2bbfbbca55836d11d3166c6ef25cc69b'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 9.39 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 225", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.00", > " Sysctl runtime: 0.00", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.32", > " Pcmk property: 0.40", > " Pcmk resource default: 0.41", > " Package: 0.42", > " Service: 0.78", > " Total: 12.41", > " Last run: 1538484945", > " Config retrieval: 4.10", > " Exec: 5.93", > "Version:", > " Config: 1538484931", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 40]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:55:45,837 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.18 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}1bfd209f5d43966d93af5fb810c46720'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.91 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " 
Total: 143", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Firewall: 0.01", > " Sysctl: 0.01", > " Augeas: 0.02", > " File: 0.17", > " Package: 0.29", > " Service: 0.60", > " Last run: 1538484940", > " Config retrieval: 2.51", > " Exec: 5.23", > " Total: 8.84", > " Concat fragment: 0.00", > "Version:", > " Config: 1538484931", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 38]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:55:45,866 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.74 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Compute::Libvirt_guests/File[/etc/systemd/system/virt-guest-shutdown.target.wants]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}747aec6803150d4bff5e901caab6bde1'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: 
content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Compute::Libvirt_guests/Systemd::Unit_file[paunch-container-shutdown.service]/File[/etc/systemd/system/virt-guest-shutdown.target.wants/paunch-container-shutdown.service]/ensure: created", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests ON_BOOT]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests ON_SHUTDOWN]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests SHUTDOWN_TIMEOUT]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/Nova::Generic_service[libvirt-guests]/Service[nova-libvirt-guests]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 7.09 seconds", > "Changes:", > " Total: 13", > 
"Events:", > " Success: 13", > "Resources:", > " Corrective change: 1", > " Changed: 13", > " Out of sync: 13", > " Total: 173", > " Restarted: 4", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " File line: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.18", > " Package: 0.43", > " Service: 0.51", > " Last run: 1538484941", > " Config retrieval: 3.15", > " Exec: 5.24", > " Filebucket: 0.00", > " Total: 9.55", > "Version:", > " Config: 1538484931", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > "Warning: Unknown variable: 'service_ensure'. at /etc/puppet/modules/nova/manifests/generic_service.pp:68:20", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:55:45,895 p=1004 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 4] ***************** >2018-10-02 08:55:45,895 p=1004 u=mistral | Tuesday 02 October 2018 08:55:45 -0400 (0:00:00.148) 0:26:58.628 ******* >2018-10-02 08:55:45,928 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:45,956 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:45,970 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:45,998 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 4] *** >2018-10-02 08:55:45,999 p=1004 u=mistral | Tuesday 02 October 2018 08:55:45 -0400 (0:00:00.103) 0:26:58.732 ******* >2018-10-02 08:55:46,031 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:55:46,057 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:55:46,071 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:55:46,097 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:55:46,097 p=1004 u=mistral | Tuesday 02 October 2018 08:55:46 -0400 (0:00:00.098) 0:26:58.830 ******* >2018-10-02 08:55:46,128 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:46,156 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:55:46,177 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:55:46,207 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:55:46,207 p=1004 u=mistral | Tuesday 02 October 2018 08:55:46 -0400 (0:00:00.110) 0:26:58.941 ******* >2018-10-02 08:55:46,241 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:55:46,268 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:55:46,282 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:55:46,311 p=1004 u=mistral | TASK [Start containers for step 4] ********************************************* >2018-10-02 08:55:46,311 p=1004 u=mistral | Tuesday 02 October 2018 08:55:46 -0400 (0:00:00.104) 0:26:59.045 ******* >2018-10-02 08:55:47,125 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:55:50,655 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:56:13,597 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:56:13,633 p=1004 u=mistral | TASK [Debug output for task: Start containers for step 4] ********************** >2018-10-02 08:56:13,634 p=1004 u=mistral | Tuesday 02 October 2018 08:56:13 -0400 (0:00:27.322) 0:27:26.367 ******* >2018-10-02 08:56:13,712 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "70c8ade901ba: Already exists", > "d6ebc8849ca4: Pulling fs layer", > "d6ebc8849ca4: Verifying Checksum", > "d6ebc8849ca4: Download complete", > "d6ebc8849ca4: Pull complete", > "Digest: sha256:c30e9025980b366bce6028b8d04991d9aba3d596895f44f362e3c35cd229409f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-listener ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-listener", > "2a011d08f65a: Pulling fs layer", > "2a011d08f65a: Download complete", > "2a011d08f65a: Pull complete", > "Digest: sha256:1da5805af3813e8526137c0114d655230e888e31f5981fd6127ae98a511e7e4f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-notifier ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-notifier", > "1b37c4e063a7: Pulling fs layer", > "1b37c4e063a7: Verifying Checksum", > "1b37c4e063a7: Download complete", > "1b37c4e063a7: Pull complete", > "Digest: sha256:424b256111261e3e44504680e92214c2e911dbb05c7ceb69663503cae2de362f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent", > "f3c66d22e08b: Already exists", > "1ee941194c83: Pulling fs layer", > "1ee941194c83: Verifying Checksum", > "1ee941194c83: Download complete", > "1ee941194c83: Pull complete", > "Digest: sha256:5fadf9b91aff51d8ab5f24ee51a0a010d3727c7e57879ac0ccdc0814214295b0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "fe519f249698: Pulling fs layer", > "fe519f249698: Verifying Checksum", > "fe519f249698: Download complete", > "fe519f249698: Pull complete", > "Digest: sha256:e3061b27a64e8035cd357533dd777f1cd445409ca771c554f63fb620ba83fd53", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-conductor ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-conductor", > "9e28a9d49d0f: Already exists", > "cb692ee62716: Pulling fs layer", > "cb692ee62716: Verifying Checksum", > "cb692ee62716: Download complete", > "cb692ee62716: Pull complete", > "Digest: sha256:a54110f587fa16c227648b98bdeb23fc344c9a19469cc176d7a0dffa4e3c8a19", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth", > "27ace23a5752: Pulling fs layer", > "27ace23a5752: Verifying Checksum", > "27ace23a5752: Download complete", > "27ace23a5752: Pull complete", > "Digest: sha256:3ee3569d8d3af79663a38fc24cd21c1a0201c4e9b60667beaeff3cf3da9d0063", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy", > "6362a106a8d8: Pulling fs layer", > "6362a106a8d8: Verifying Checksum", > "6362a106a8d8: Download complete", > "6362a106a8d8: Pull complete", > "Digest: sha256:06e086a9462ccaf488f6d90e4752d401e631a2a9bc9d04d0087b6be64a5a1148", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-scheduler ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-scheduler", > "e1f8124f27e8: Pulling fs layer", > "e1f8124f27e8: Verifying Checksum", > "e1f8124f27e8: Download complete", > "e1f8124f27e8: Pull complete", > "Digest: sha256:489f578562bb596266aad6b797c244bc07ad6c2f7b6bd37808a68ff20a2a1622", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-engine ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-engine", > "8699899a971e: Already exists", > "a9e6848490f8: Pulling fs layer", > "a9e6848490f8: Verifying Checksum", > "a9e6848490f8: Download complete", > "a9e6848490f8: Pull complete", > "Digest: sha256:3f5d191573344ad508dcc60fbb538f5d81f85300a8d2d3f96c8cfe24a4c1e9a5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-container ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-container", > "119515329f22: Already exists", > "88e05048c7d2: Pulling fs layer", > "88e05048c7d2: Verifying Checksum", > "88e05048c7d2: Download complete", > "88e05048c7d2: Pull complete", > "Digest: sha256:0489aed30ce723a0bf5614a6ca9e5687013612e095c66dfb18cf34960d3e4ed1", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", > "stdout: e2acc348211c5e27777f8f2506efa5766fbe97972696fc0e22fa90df7ade7e16", > "stdout: 5e799a5d36ad64346bae15370fce4818d2767702fbb6970ded133344fd934b55", > "stdout: 8212388b947aeb2d5ae0d35b774cbf3e812b5cb41748fd5e0b1a1ab1ed121c7b", > "stdout: 94fec4d857b0f46ede5055db04067b36078170b0e3ca754e5b9bd3dda65deb23", > "stdout: 04ab54db104e117113a5477dee801903bc8bd042a1e5cd052247c29531f65066", > "stdout: 1435d512732b5a5f26decd33f0a78746489244f1e004b21db7d7724f25c0aa3a", > "stdout: 23dfc188f7970c0d0887ce882da7076b1462f2b0031ef2620695f8515d6fa846", > "stdout: 96c321791e1acb563bcbc24491bdfe79f3cda7e2d560d036b505b40a131ac00b", > "stdout: 471318a70be73103386e48d7d064db3144c365e147af64786aa25749caad9245", > "stdout: ef23696fedcb7b676188c31a690712450c3149b90562210502df4257ece38d5d", > "stdout: 98b6a35d44e459588d69171f8e4fb50be345c8c97b69c2619552a2957a3eaa26", > "stdout: d8ef5d5811ed5d32f3cb30109b585944aac381e70cb6c228b57473963129ff22", > "stdout: 
b52e7d83bbb39d0482003e5a78323d4ab713968b91db2408bb0ca44ffe2a9b48", > "stdout: 09f5e85417d81c7f360b194c7610ba6cca05f0d05c09986397ebc2f5ea9a1636", > "stdout: f39bb84f9830d40d9b4b38a8a5c52025305ac2e7eca0670fce5496aa29e84272", > "stdout: 8c4e44b51bc3f54c92ab76f5f5b6b90791b929f4a5917e4e1fbd1a158886cb63", > "stdout: dec6cbfa93fa786e85e576cd7ed7a412b90245e75882c2027834f5c7a8a3890f", > "stdout: dcbd29672f57b1361027ab1615bed839eddb941e22dfd87d3e19d87ab02dfa21", > "stdout: 7182062b16e9935e40a9c6e364ad535bd035a9ac9fc4d5059ed768ee6c6d522b", > "stdout: 8146026deeef3308022c3afd81f62b59a1c4c9c8e930d07784c3425fae6a4444", > "stdout: 27e4150fd3e1304da8fcdaddadb0aa44d27ca70df277743ed1a282357d7bf303", > "stdout: c87846cbe7b3bb5496a89be12a7a0f08281cc9f9eab46b300ee57b3a5bdfc73a", > "stdout: fc6e19cb2d504b7ca4d26daf9c62ab108008318b9a3d0f64e7edc781002d9f48", > "stdout: f0ac07f536a62d3e7571bfce8b359329b884548396ecb35fa2a7d2f30b6e7245", > "stdout: a0ef47f21785df6cc3b36c50c8b9a04bf1383a51d78b5b28852cd1a40e888e0c", > "stdout: 0bf6099a0e2be5786ae8bd812329686bff79a7237aa6d03da0946799ff76da3e", > "stdout: 12fb32f5355146641f620b3a4cf73c2b2e5d37e15d382a2225610d4fbea8f36d", > "stdout: a9b59d833ae81e7545b65d0c6d595e1bca35455fe222790ae562000c15b4c60e", > "stdout: 0fa5d62cb410487a1ceee407a7cbf830c45b0657337ad424a6c7d8e16a9d433d", > "stdout: c7d1dcefc886855439ed3a10ff2e224347de6f8351d8ff04b8be36aa4d2f023c", > "stdout: a3749d31a95ab901833842e952b7d36e439d0cc1dcce395af88d0b04ee0fd350", > "stdout: fc311cf68424cf3ea7cf367b6eb89f66f76a8007dfa11f43c6c2006dc5653161", > "stdout: e666e4a034d4ac3052d3ef3758dbc53b5a43d7d0e95d234dca8a8e6e81ebf70e", > "stdout: 666b4b61c158c9112aebdabf3baf920c873cf07de9f44c36bedd284e609dba17", > "stdout: 17c7b29dd93d2a75a934daac9588c2583172d8eafb7ccccb34b77da3cb69632f", > "stdout: ", > "stdout: ebcc664da00f2279b561368b5a030b297408223da616f31d13e278c70f47d147", > "stdout: fb5c2fbc3365a22aaecd48e4ff7c546d16248ad11350141d7f9400573f2adc26", > "stdout: 
f0770e1f1866250eb33f7d6765fd9e7a1879e187d3418757522ada0eac86bd02", > "stdout: 86aa8fbe76cb2a220ba7af90a8ade918ca2d91e385271257099765ce45192f0a", > "stdout: 75aea6db31ac6940db1a7c723e01122e1eda1348b470ff3b7a0844a681178c1d", > "stdout: 3c73f049064eeb151d66fee8b2e7d71f716c920f915f724e8bfb35b946253d71", > "stdout: 1c1d332403ca3e5695e55b413ef7a413b68d75fbc0f7df426f792f367c1fe46f", > "stdout: 7858e42d2fc213728f0602bd3ae2133d4f1eb98ef9fda35f534030c26fa86d25", > "stdout: 4cc0292e35be1a1e94ce63d9bee0f364a0bbcc5fee4b10151542cf2b164c6b28" > ] >} >2018-10-02 08:56:13,731 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: 10024d67bbc5706a10fba61842f7916859f484a8a37d3e31a0bbb69d5edd4344", > "", > "stderr: " > ] >} >2018-10-02 08:56:13,762 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "5fcda0d83a5e: Already exists", > "e31c54103440: Pulling fs layer", > "e31c54103440: Verifying Checksum", > "e31c54103440: Download complete", > "e31c54103440: Pull complete", > "Digest: sha256:c3b75eab795178714aa2fcc672436c12ce218ab1bf2960e601bec3d0a865769b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "f3c66d22e08b: Already exists", > "fe519f249698: Pulling fs layer", > "fe519f249698: Verifying Checksum", > "fe519f249698: Download complete", > "fe519f249698: Pull complete", > "Digest: sha256:e3061b27a64e8035cd357533dd777f1cd445409ca771c554f63fb620ba83fd53", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", > "stdout: 69646cca73b80f12fae3d386ce4f3772ce3bc410b51136196840d41d4f720276", > "stdout: 330968dfbccd6e063c634d7b4f26f00e8ef80868aa3d386d45f4fc950451b647", > "stdout: ecc434be1ef2b7663608effa374e7871d4b79a587830954c5e53addb9a533426", > "stdout: Secret 4398e5b0-c63c-11e8-b95a-525400c8bd81 created", > "Secret value set", > "stdout: db0e011999b3467e265479b0513051618035fc6fbba31d5e1e0d37d5a00f0c6b", > "stdout: ed99da70ada58f4a9f4b707d2ded826ccb53c7e681578c2f96c94cea910a393f" > ] >} >2018-10-02 08:56:13,793 p=1004 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks4.json exists] ******** >2018-10-02 08:56:13,793 p=1004 u=mistral | Tuesday 02 October 2018 08:56:13 -0400 (0:00:00.159) 0:27:26.527 ******* >2018-10-02 08:56:14,061 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:56:14,097 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:56:14,132 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538483622.1932342, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "54032a2f094e88383168daf9a4c4272527eb58c2", "ctime": 1538483622.2002342, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 65012131, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": 
"text/plain", "mode": "0600", "mtime": 1538483622.005234, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks4.json", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 399, "uid": 0, "version": "1056845174", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 08:56:14,163 p=1004 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 4] ******************** >2018-10-02 08:56:14,163 p=1004 u=mistral | Tuesday 02 October 2018 08:56:14 -0400 (0:00:00.369) 0:27:26.897 ******* >2018-10-02 08:56:14,234 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:56:14,250 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:56:38,251 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:56:38,282 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 4] *** >2018-10-02 08:56:38,282 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:24.118) 0:27:51.015 ******* >2018-10-02 08:56:38,349 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:56:38,365 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:56:38,422 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 12:56:14,497 INFO: 115640 -- Running docker-puppet", > "2018-10-02 12:56:14,498 INFO: 115640 -- Service compilation completed.", > "2018-10-02 12:56:14,498 INFO: 115640 -- Starting multiprocess configuration steps. 
Using 8 processes.", > "2018-10-02 12:56:14,542 INFO: 115649 -- Starting configuration of cinder_init_tasks using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:56:14,545 INFO: 115649 -- Removing container: docker-puppet-cinder_init_tasks", > "2018-10-02 12:56:14,681 INFO: 115649 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 12:56:38,091 INFO: 115649 -- Removing container: docker-puppet-cinder_init_tasks", > "2018-10-02 12:56:38,155 INFO: 115649 -- Finished processing puppet configs for cinder_init_tasks" > ] >} >2018-10-02 08:56:38,432 p=1004 u=mistral | PLAY [External deployment step 5] ********************************************** >2018-10-02 08:56:38,452 p=1004 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 08:56:38,452 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.169) 0:27:51.185 ******* >2018-10-02 08:56:38,475 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,490 p=1004 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 08:56:38,490 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.038) 0:27:51.223 ******* >2018-10-02 08:56:38,522 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,530 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,540 p=1004 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) 
=> {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,556 p=1004 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 08:56:38,556 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.066) 0:27:51.290 ******* >2018-10-02 08:56:38,578 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,594 p=1004 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 08:56:38,595 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.038) 0:27:51.328 ******* >2018-10-02 08:56:38,618 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,635 p=1004 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 08:56:38,636 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.041) 0:27:51.369 ******* >2018-10-02 08:56:38,664 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,679 p=1004 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 08:56:38,679 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.043) 0:27:51.412 ******* >2018-10-02 08:56:38,713 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,729 p=1004 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 08:56:38,729 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.049) 0:27:51.462 ******* >2018-10-02 08:56:38,752 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:56:38,768 p=1004 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 08:56:38,768 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.038) 0:27:51.501 ******* >2018-10-02 08:56:38,791 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,808 p=1004 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 08:56:38,809 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.040) 0:27:51.542 ******* >2018-10-02 08:56:38,834 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,852 p=1004 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 08:56:38,852 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.043) 0:27:51.585 ******* >2018-10-02 08:56:38,878 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,895 p=1004 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 08:56:38,895 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.042) 0:27:51.628 ******* >2018-10-02 08:56:38,922 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,938 p=1004 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 08:56:38,938 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.043) 0:27:51.672 ******* >2018-10-02 08:56:38,963 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:38,977 p=1004 u=mistral | TASK [set ceph-ansible command] 
************************************************ >2018-10-02 08:56:38,977 p=1004 u=mistral | Tuesday 02 October 2018 08:56:38 -0400 (0:00:00.039) 0:27:51.711 ******* >2018-10-02 08:56:39,000 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,013 p=1004 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 08:56:39,014 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.036) 0:27:51.747 ******* >2018-10-02 08:56:39,044 p=1004 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,061 p=1004 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 08:56:39,061 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.047) 0:27:51.795 ******* >2018-10-02 08:56:39,084 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,099 p=1004 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 08:56:39,099 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.038) 0:27:51.833 ******* >2018-10-02 08:56:39,121 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,142 p=1004 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 08:56:39,143 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.043) 0:27:51.876 ******* >2018-10-02 08:56:39,169 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,184 p=1004 u=mistral | TASK [generate 
ceph-ansible group vars mons] *********************************** >2018-10-02 08:56:39,184 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.041) 0:27:51.917 ******* >2018-10-02 08:56:39,216 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,235 p=1004 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 08:56:39,235 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.051) 0:27:51.968 ******* >2018-10-02 08:56:39,258 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,273 p=1004 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 08:56:39,273 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.037) 0:27:52.006 ******* >2018-10-02 08:56:39,294 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,308 p=1004 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 08:56:39,308 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.035) 0:27:52.041 ******* >2018-10-02 08:56:39,330 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,344 p=1004 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 08:56:39,344 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.036) 0:27:52.077 ******* >2018-10-02 08:56:39,368 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,381 p=1004 u=mistral | TASK [Write role data file] **************************************************** >2018-10-02 08:56:39,382 p=1004 u=mistral | 
Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.037) 0:27:52.115 ******* >2018-10-02 08:56:39,412 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,425 p=1004 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 08:56:39,425 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.043) 0:27:52.159 ******* >2018-10-02 08:56:39,449 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,465 p=1004 u=mistral | TASK [Delete param file] ******************************************************* >2018-10-02 08:56:39,466 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.040) 0:27:52.199 ******* >2018-10-02 08:56:39,489 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,502 p=1004 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 08:56:39,502 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.036) 0:27:52.236 ******* >2018-10-02 08:56:39,523 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,537 p=1004 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 08:56:39,537 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.034) 0:27:52.270 ******* >2018-10-02 08:56:39,556 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,570 p=1004 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 08:56:39,570 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.032) 0:27:52.303 ******* >2018-10-02 08:56:39,595 p=1004 
u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,608 p=1004 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 08:56:39,608 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.038) 0:27:52.341 ******* >2018-10-02 08:56:39,626 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,639 p=1004 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 08:56:39,639 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.031) 0:27:52.372 ******* >2018-10-02 08:56:39,657 p=1004 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,662 p=1004 u=mistral | PLAY [Overcloud deploy step tasks for 5] *************************************** >2018-10-02 08:56:39,670 p=1004 u=mistral | PLAY [Overcloud common deploy step tasks 5] ************************************ >2018-10-02 08:56:39,700 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 08:56:39,700 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.060) 0:27:52.433 ******* >2018-10-02 08:56:39,731 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,762 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,776 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,806 p=1004 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 08:56:39,806 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.105) 0:27:52.539 
******* >2018-10-02 08:56:39,842 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,874 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,891 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:39,965 p=1004 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 08:56:39,965 p=1004 u=mistral | Tuesday 02 October 2018 08:56:39 -0400 (0:00:00.159) 0:27:52.699 ******* >2018-10-02 08:56:40,000 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,028 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,044 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,071 p=1004 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 08:56:40,072 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.106) 0:27:52.805 ******* >2018-10-02 08:56:40,106 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,147 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,161 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,198 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:56:40,199 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.126) 0:27:52.932 ******* >2018-10-02 08:56:40,231 p=1004 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,260 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,274 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,300 p=1004 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 08:56:40,301 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.102) 0:27:53.034 ******* >2018-10-02 08:56:40,338 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:56:40,368 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:56:40,381 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:56:40,408 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 08:56:40,408 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.107) 0:27:53.142 ******* >2018-10-02 08:56:40,442 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,471 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,486 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,513 p=1004 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 08:56:40,514 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.105) 0:27:53.247 ******* >2018-10-02 08:56:40,548 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,578 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
08:56:40,591 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,619 p=1004 u=mistral | TASK [Create /var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 08:56:40,619 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.105) 0:27:53.353 ******* >2018-10-02 08:56:40,650 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,681 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,693 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,720 p=1004 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 08:56:40,720 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.100) 0:27:53.453 ******* >2018-10-02 08:56:40,751 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,777 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,790 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,816 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:56:40,817 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.096) 0:27:53.550 ******* >2018-10-02 08:56:40,851 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,879 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,891 p=1004 u=mistral | skipping: [compute-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:40,917 p=1004 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 08:56:40,917 p=1004 u=mistral | Tuesday 02 October 2018 08:56:40 -0400 (0:00:00.100) 0:27:53.650 ******* >2018-10-02 08:56:40,947 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:56:40,978 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:56:40,998 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:56:41,029 p=1004 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 08:56:41,029 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.112) 0:27:53.762 ******* >2018-10-02 08:56:41,064 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,094 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,107 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,134 p=1004 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 08:56:41,134 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.105) 0:27:53.867 ******* >2018-10-02 08:56:41,167 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,197 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,210 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,238 p=1004 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-10-02 08:56:41,239 p=1004 u=mistral | Tuesday 02 
October 2018 08:56:41 -0400 (0:00:00.104) 0:27:53.972 ******* >2018-10-02 08:56:41,303 p=1004 u=mistral | skipping: [controller-0] => (item=create_swift_secret.sh) => {"changed": false, "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,305 p=1004 u=mistral | skipping: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": false, "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS 
\\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,307 p=1004 u=mistral | skipping: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,309 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_discover_hosts.sh) => {"changed": false, "item": ["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport 
OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,316 p=1004 u=mistral | skipping: [controller-0] => (item=nova_api_ensure_default_cell.sh) => {"changed": false, "item": 
["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,321 p=1004 u=mistral | skipping: [controller-0] => (item=set_swift_keymaster_key_id.sh) => {"changed": false, "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if 
Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,350 p=1004 u=mistral | skipping: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": false, "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,357 p=1004 u=mistral | skipping: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": false, "item": ["nova_statedir_ownership.py", {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. 
Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n 
pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,383 p=1004 u=mistral | TASK [Set docker_config_default fact] ******************************************
>2018-10-02 08:56:41,383 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.144) 0:27:54.117 *******
>2018-10-02 08:56:41,454 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,457 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,458 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,459 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,460 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,461 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,461 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,461 p=1004 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,462 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,463 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,463 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,469 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,475 p=1004 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,476 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,479 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,486 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,491 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,497 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,509 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,510 p=1004 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,512 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,538 p=1004 u=mistral | TASK [Set docker_startup_configs_with_default fact] ****************************
>2018-10-02 08:56:41,538 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.154) 0:27:54.271 *******
>2018-10-02 08:56:41,569 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,596 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,612 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-10-02 08:56:41,638 p=1004 u=mistral | TASK [Write docker-container-startup-configs] **********************************
>2018-10-02 08:56:41,638 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.099) 0:27:54.371 *******
>2018-10-02 08:56:41,669 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,695 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,710 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,735 p=1004 u=mistral | TASK [Write per-step docker-container-startup-configs] *************************
>2018-10-02 08:56:41,735 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.097) 0:27:54.469 *******
>2018-10-02 08:56:41,803 p=1004 u=mistral | skipping: [ceph-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,804 p=1004 u=mistral | skipping: [ceph-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,808 p=1004 u=mistral | skipping: [ceph-0] => (item=step_3) => {"changed": false, "item": ["step_3", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,810 p=1004 u=mistral | skipping: [ceph-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,818 p=1004 u=mistral | skipping: [controller-0] => (item=step_1) => {"changed": false, "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,819 p=1004 u=mistral | skipping: [ceph-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,822 p=1004 u=mistral | skipping: [ceph-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,827 p=1004 u=mistral | skipping: [controller-0] => (item=step_2) => {"changed": false, "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,829 p=1004 u=mistral | skipping: [compute-0] => (item=step_1) => {"changed": false, "item": ["step_1", {}], "skip_reason": "Conditional result was False"}
>2018-10-02 08:56:41,838 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes",
"ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", 
"neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": 
"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, 
"swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,840 p=1004 u=mistral | skipping: [compute-0] => (item=step_2) => {"changed": false, "item": ["step_2", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,847 p=1004 u=mistral | skipping: [compute-0] => (item=step_3) => {"changed": false, "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": 
"host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,864 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, 
"restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": 
{"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", 
"/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", 
"/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], 
"healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,871 p=1004 u=mistral | 
skipping: [compute-0] => (item=step_4) => {"changed": false, "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,876 p=1004 u=mistral | skipping: [compute-0] => (item=step_5) => {"changed": false, "item": ["step_5", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,879 p=1004 u=mistral | skipping: [controller-0] => (item=step_5) => {"changed": false, "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 
openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", 
"/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api 
/nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538482549"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,881 p=1004 u=mistral | skipping: [controller-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,881 p=1004 u=mistral | skipping: [compute-0] => (item=step_6) => {"changed": false, "item": ["step_6", {}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,920 p=1004 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 08:56:41,920 p=1004 u=mistral | Tuesday 02 October 2018 08:56:41 -0400 (0:00:00.184) 0:27:54.653 ******* >2018-10-02 08:56:41,956 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:41,985 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,000 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,027 p=1004 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 08:56:42,027 p=1004 u=mistral | Tuesday 02 October 2018 08:56:42 -0400 (0:00:00.106) 0:27:54.760 ******* >2018-10-02 08:56:42,089 p=1004 u=mistral | skipping: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": 
"Conditional result was False"} >2018-10-02 08:56:42,144 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,151 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,158 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,165 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,172 p=1004 u=mistral | skipping: [compute-0] => 
(item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,179 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,193 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": 
"0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,196 p=1004 u=mistral | skipping: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,260 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,267 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,273 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was 
False"} >2018-10-02 08:56:42,278 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,284 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,289 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,296 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,303 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,308 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,317 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file 
/etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,320 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,325 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,331 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, 
{"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,337 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,342 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,348 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,354 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,359 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,365 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": 
"/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,371 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,376 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,382 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": 
true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,388 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,393 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,399 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "skip_reason": "Conditional result 
was False"} >2018-10-02 08:56:42,404 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,410 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,415 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,421 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": 
"/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,427 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,435 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": 
"/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,439 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,445 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "skip_reason": 
"Conditional result was False"} >2018-10-02 08:56:42,451 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,456 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,463 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,468 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} 
>2018-10-02 08:56:42,474 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,480 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,485 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,492 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": 
"Conditional result was False"} >2018-10-02 08:56:42,497 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,503 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,509 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,516 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": 
"/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,520 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,527 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, 
"optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,532 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,538 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,544 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,550 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,555 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,561 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,566 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} 
>2018-10-02 08:56:42,572 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,577 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,582 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,590 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,596 p=1004 u=mistral | skipping: [controller-0] => 
(item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,601 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,607 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,612 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,618 p=1004 u=mistral | skipping: [controller-0] => 
(item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,623 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,629 p=1004 u=mistral | skipping: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": false, "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,668 p=1004 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 08:56:42,668 p=1004 u=mistral | Tuesday 02 October 2018 08:56:42 -0400 (0:00:00.640) 0:27:55.401 ******* >2018-10-02 08:56:42,682 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:56:42,711 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:56:42,742 p=1004 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 08:56:42,774 p=1004 u=mistral | TASK [Write 
docker-puppet-tasks json files] ************************************ >2018-10-02 08:56:42,774 p=1004 u=mistral | Tuesday 02 October 2018 08:56:42 -0400 (0:00:00.106) 0:27:55.508 ******* >2018-10-02 08:56:42,842 p=1004 u=mistral | skipping: [controller-0] => (item=step_3) => {"changed": false, "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,844 p=1004 u=mistral | skipping: [controller-0] => (item=step_4) => {"changed": false, "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,888 p=1004 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 08:56:42,889 p=1004 u=mistral | Tuesday 02 October 2018 08:56:42 -0400 (0:00:00.114) 0:27:55.622 ******* >2018-10-02 08:56:42,930 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,964 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:42,979 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,005 p=1004 u=mistral | TASK [Check for /etc/puppet/check-mode directory 
for check mode] *************** >2018-10-02 08:56:43,005 p=1004 u=mistral | Tuesday 02 October 2018 08:56:43 -0400 (0:00:00.116) 0:27:55.738 ******* >2018-10-02 08:56:43,035 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,064 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,075 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,101 p=1004 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 08:56:43,101 p=1004 u=mistral | Tuesday 02 October 2018 08:56:43 -0400 (0:00:00.096) 0:27:55.834 ******* >2018-10-02 08:56:43,139 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,169 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,181 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:43,209 p=1004 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 08:56:43,210 p=1004 u=mistral | Tuesday 02 October 2018 08:56:43 -0400 (0:00:00.108) 0:27:55.943 ******* >2018-10-02 08:56:43,854 p=1004 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538485003.31-186994604406783/source", "state": "file", "uid": 0} >2018-10-02 08:56:43,874 p=1004 u=mistral | changed: [controller-0] => 
{"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538485003.27-200110033388153/source", "state": "file", "uid": 0} >2018-10-02 08:56:43,932 p=1004 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538485003.34-160171891056082/source", "state": "file", "uid": 0} >2018-10-02 08:56:43,958 p=1004 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 08:56:43,959 p=1004 u=mistral | Tuesday 02 October 2018 08:56:43 -0400 (0:00:00.748) 0:27:56.692 ******* >2018-10-02 08:56:43,991 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:44,019 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:44,030 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:56:44,066 p=1004 u=mistral | TASK [Run puppet host configuration for step 5] ******************************** >2018-10-02 08:56:44,066 p=1004 u=mistral | Tuesday 02 October 2018 08:56:44 -0400 (0:00:00.107) 0:27:56.800 ******* >2018-10-02 08:56:55,602 p=1004 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:56:56,622 
p=1004 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:57:03,144 p=1004 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 08:57:03,172 p=1004 u=mistral | TASK [Debug output for task: Run puppet host configuration for step 5] ********* >2018-10-02 08:57:03,172 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:19.105) 0:28:15.906 ******* >2018-10-02 08:57:03,244 p=1004 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.49 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 4.97 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Changed: 2", > " Out of sync: 2", > " Total: 225", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Augeas: 0.02", > " Firewall: 0.02", > " Service: 0.21", > " Pcmk property: 0.42", > " Pcmk resource default: 0.42", > " Package: 0.49", > " Exec: 1.00", > " File: 
1.14", > " Last run: 1538485022", > " Config retrieval: 4.96", > " Total: 8.70", > " Filebucket: 0.00", > "Version:", > " Config: 1538485012", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 40]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:57:03,272 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.33 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.51 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 143", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.12", > " 
Service: 0.14", > " Exec: 0.22", > " Package: 0.37", > " Last run: 1538485015", > " Config retrieval: 2.75", > " Total: 3.64", > " Concat fragment: 0.00", > "Version:", > " Config: 1538485010", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 38]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:57:03,301 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.71 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Compute::Libvirt_guests/Systemd::Unit_file[paunch-container-shutdown.service]/File[/etc/systemd/system/virt-guest-shutdown.target.wants/paunch-container-shutdown.service]/seltype: seltype changed 'virtd_unit_file_t' to 'systemd_unit_file_t'", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 
1.76 seconds", > "Changes:", > " Total: 3", > "Events:", > " Success: 3", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 173", > " Out of sync: 3", > " Changed: 3", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Anchor: 0.00", > " Sysctl: 0.01", > " Sysctl runtime: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " Service: 0.18", > " File: 0.21", > " Exec: 0.24", > " Package: 0.35", > " Last run: 1538485016", > " Config retrieval: 3.14", > " Total: 4.16", > " Filebucket: 0.00", > "Version:", > " Config: 1538485011", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > "Warning: Unknown variable: 'service_ensure'. at /etc/puppet/modules/nova/manifests/generic_service.pp:68:20", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 08:57:03,334 p=1004 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 5] ***************** >2018-10-02 08:57:03,334 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:00.161) 0:28:16.067 ******* >2018-10-02 08:57:03,372 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:57:03,403 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:57:03,417 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:57:03,447 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 5] *** >2018-10-02 08:57:03,447 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:00.112) 0:28:16.180 ******* >2018-10-02 08:57:03,482 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:57:03,512 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:57:03,527 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:57:03,555 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:57:03,555 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:00.108) 0:28:16.288 ******* >2018-10-02 08:57:03,589 p=1004 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:57:03,623 p=1004 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 08:57:03,638 p=1004 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 08:57:03,664 p=1004 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 08:57:03,665 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:00.109) 0:28:16.398 ******* >2018-10-02 08:57:03,697 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:57:03,725 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:57:03,739 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:57:03,767 p=1004 u=mistral | TASK [Start containers for step 5] ********************************************* >2018-10-02 08:57:03,767 p=1004 u=mistral | Tuesday 02 October 2018 08:57:03 -0400 (0:00:00.102) 0:28:16.500 ******* >2018-10-02 08:57:04,308 p=1004 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:57:04,340 p=1004 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:59:05,717 p=1004 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:59:05,744 p=1004 u=mistral | TASK [Debug output for task: Start containers for step 5] ********************** >2018-10-02 08:59:05,744 p=1004 u=mistral | Tuesday 02 October 2018 08:59:05 -0400 (0:02:01.977) 0:30:18.477 ******* >2018-10-02 08:59:05,842 p=1004 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:59:05,870 p=1004 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 08:59:08,157 p=1004 u=mistral | ok: [controller-0] => { > 
"failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "d0a704666261: Already exists", > "4bbf5890fd78: Pulling fs layer", > "4bbf5890fd78: Verifying Checksum", > "4bbf5890fd78: Download complete", > "4bbf5890fd78: Pull complete", > "Digest: sha256:34c900f1153f98d7e975e9404131164bed2e6ee3ae6b2e44d8d1637d0e8f12b1", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd", > "0c26a758ebd3: Pulling fs layer", > "0c26a758ebd3: Verifying Checksum", > "0c26a758ebd3: Download complete", > "0c26a758ebd3: Pull complete", > "Digest: sha256:4f60fbf6e72a6354ba76456ab95608b8d99e5a87b25118fca6fceee7fdae33fc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", > "stdout: ", > "stderr: Error: unable to find resource 'openstack-cinder-backup'", > "stderr: Error: unable to find resource 'openstack-cinder-volume'", > "stdout: bc410cf9996e245f01789f23f42cc4204fa7fcf330d51b5865a030f9cf11827c", > "stdout: Running command: '/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting 
/etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/gnocchi/gnocchi.conf to /etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-gnocchi_wsgi.conf to /etc/httpd/conf.d/10-gnocchi_wsgi.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to 
/etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to 
/etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to /etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > 
"INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to /etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Creating directory /var/www/cgi-bin/gnocchi", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/gnocchi/app to /var/www/cgi-bin/gnocchi/app", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/log/gnocchi", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ GNOCCHI_LOG_DIR=/var/log/kolla/gnocchi", > "++ [[ ! -d /var/log/kolla/gnocchi ]]", > "++ mkdir -p /var/log/kolla/gnocchi", > "+++ stat -c %U:%G /var/log/kolla/gnocchi", > "++ [[ root:kolla != \\g\\n\\o\\c\\c\\h\\i\\:\\k\\o\\l\\l\\a ]]", > "++ chown gnocchi:kolla /var/log/kolla/gnocchi", > "+++ stat -c %a /var/log/kolla/gnocchi", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/gnocchi", > "++ . /usr/local/bin/kolla_gnocchi_extend_start", > "+++ [[ rhel =~ debian|ubuntu ]]", > "+++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "+++ [[ -n '' ]]", > "+ echo 'Running command: '\\''/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'\\'''", > "+ exec /usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", > "2018-10-02 12:57:09,717 [21] WARNING oslo_config.cfg: Deprecated: Option \"coordination_url\" from group \"storage\" is deprecated. 
Use option \"coordination_url\" from group \"DEFAULT\".", > "2018-10-02 12:57:09,717 [21] INFO gnocchi.service: Gnocchi version 4.3.2.dev7", > "2018-10-02 12:57:09,717 [21] DEBUG gnocchi.service: ********************************************************************************", > "2018-10-02 12:57:09,717 [21] DEBUG gnocchi.service: Configuration options gathered from:", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: command line args: ['--sacks-number=128']", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: config files: ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: ================================================================================", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: config_dir = []", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: config_file = ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: config_source = []", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: coordination_url = ****", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: debug = True", > "2018-10-02 12:57:09,718 [21] DEBUG gnocchi.service: log_dir = /var/log/gnocchi", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: log_file = None", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: parallel_operations = 8", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: sacks_number = 128", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: skip_archive_policies_creation = False", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: skip_incoming = False", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: skip_index = False", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: skip_storage = False", > "2018-10-02 12:57:09,719 [21] DEBUG gnocchi.service: syslog_log_facility = user", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: use_journal = False", > 
"2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: use_syslog = False", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: verbose = True", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: statsd.archive_policy_name = low", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: statsd.creator = None", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: statsd.flush_delay = 10.0", > "2018-10-02 12:57:09,720 [21] DEBUG gnocchi.service: statsd.host = 0.0.0.0", > "2018-10-02 12:57:09,721 [21] DEBUG gnocchi.service: statsd.port = 8125", > "2018-10-02 12:57:09,721 [21] DEBUG gnocchi.service: statsd.resource_id = 0a8b55df-f90f-491c-8cb9-7cdecec6fc26", > "2018-10-02 12:57:09,721 [21] DEBUG gnocchi.service: incoming.ceph_conffile = /etc/ceph/ceph.conf", > "2018-10-02 12:57:09,721 [21] DEBUG gnocchi.service: incoming.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-10-02 12:57:09,722 [21] DEBUG gnocchi.service: incoming.ceph_pool = metrics", > "2018-10-02 12:57:09,722 [21] DEBUG gnocchi.service: incoming.ceph_secret = ****", > "2018-10-02 12:57:09,722 [21] DEBUG gnocchi.service: incoming.ceph_timeout = 30", > "2018-10-02 12:57:09,722 [21] DEBUG gnocchi.service: incoming.ceph_username = openstack", > "2018-10-02 12:57:09,723 [21] DEBUG gnocchi.service: incoming.driver = redis", > "2018-10-02 12:57:09,723 [21] DEBUG gnocchi.service: incoming.file_basepath = /var/lib/gnocchi", > "2018-10-02 12:57:09,723 [21] DEBUG gnocchi.service: incoming.file_subdir_len = 2", > "2018-10-02 12:57:09,723 [21] DEBUG gnocchi.service: incoming.redis_url = redis://:giTgoE6dqwKoDkmqbxtzK1FVH@172.17.1.26:6379/", > "2018-10-02 12:57:09,724 [21] DEBUG gnocchi.service: incoming.s3_access_key_id = ", > "2018-10-02 12:57:09,724 [21] DEBUG gnocchi.service: incoming.s3_bucket_prefix = gnocchi", > "2018-10-02 12:57:09,724 [21] DEBUG gnocchi.service: incoming.s3_check_consistency_timeout = 60.0", > "2018-10-02 12:57:09,725 [21] DEBUG gnocchi.service: incoming.s3_endpoint_url = 
", > "2018-10-02 12:57:09,725 [21] DEBUG gnocchi.service: incoming.s3_max_pool_connections = 50", > "2018-10-02 12:57:09,725 [21] DEBUG gnocchi.service: incoming.s3_region_name = ", > "2018-10-02 12:57:09,725 [21] DEBUG gnocchi.service: incoming.s3_secret_access_key = ", > "2018-10-02 12:57:09,726 [21] DEBUG gnocchi.service: incoming.swift_auth_insecure = False", > "2018-10-02 12:57:09,726 [21] DEBUG gnocchi.service: incoming.swift_auth_version = 1", > "2018-10-02 12:57:09,726 [21] DEBUG gnocchi.service: incoming.swift_authurl = http://localhost:8080/auth/v1.0", > "2018-10-02 12:57:09,727 [21] DEBUG gnocchi.service: incoming.swift_cacert = ", > "2018-10-02 12:57:09,727 [21] DEBUG gnocchi.service: incoming.swift_container_prefix = gnocchi", > "2018-10-02 12:57:09,727 [21] DEBUG gnocchi.service: incoming.swift_endpoint_type = publicURL", > "2018-10-02 12:57:09,727 [21] DEBUG gnocchi.service: incoming.swift_key = ****", > "2018-10-02 12:57:09,728 [21] DEBUG gnocchi.service: incoming.swift_preauthtoken = ****", > "2018-10-02 12:57:09,728 [21] DEBUG gnocchi.service: incoming.swift_project_domain_name = Default", > "2018-10-02 12:57:09,728 [21] DEBUG gnocchi.service: incoming.swift_project_name = ", > "2018-10-02 12:57:09,729 [21] DEBUG gnocchi.service: incoming.swift_region = ", > "2018-10-02 12:57:09,729 [21] DEBUG gnocchi.service: incoming.swift_service_type = object-store", > "2018-10-02 12:57:09,729 [21] DEBUG gnocchi.service: incoming.swift_timeout = 300", > "2018-10-02 12:57:09,729 [21] DEBUG gnocchi.service: incoming.swift_url = ", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: incoming.swift_user = admin:admin", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: incoming.swift_user_domain_name = Default", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: metricd.greedy = True", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: metricd.metric_cleanup_delay = 300", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: 
metricd.metric_processing_delay = 30", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: metricd.metric_reporting_delay = 120", > "2018-10-02 12:57:09,730 [21] DEBUG gnocchi.service: metricd.processing_replicas = 3", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: metricd.workers = 4", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: database.backend = sqlalchemy", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: database.connection = ****", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: database.connection_debug = 0", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: database.connection_parameters = ", > "2018-10-02 12:57:09,731 [21] DEBUG gnocchi.service: database.connection_recycle_time = 3600", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.connection_trace = False", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.db_inc_retry_interval = True", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.db_max_retries = 20", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.db_max_retry_interval = 10", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.db_retry_interval = 1", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.max_overflow = 50", > "2018-10-02 12:57:09,732 [21] DEBUG gnocchi.service: database.max_pool_size = 5", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.max_retries = 10", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.min_pool_size = 1", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.mysql_enable_ndb = False", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.mysql_sql_mode = TRADITIONAL", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.pool_timeout = None", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.retry_interval = 10", > "2018-10-02 12:57:09,733 [21] DEBUG gnocchi.service: database.slave_connection = ****", > "2018-10-02 
12:57:09,734 [21] DEBUG gnocchi.service: database.sqlite_synchronous = True", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: database.use_db_reconnect = False", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: storage.ceph_conffile = /etc/ceph/ceph.conf", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: storage.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: storage.ceph_pool = metrics", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: storage.ceph_secret = ****", > "2018-10-02 12:57:09,734 [21] DEBUG gnocchi.service: storage.ceph_timeout = 30", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.ceph_username = openstack", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.driver = ceph", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.file_basepath = /var/lib/gnocchi", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.file_subdir_len = 2", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.redis_url = redis://localhost:6379/", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.s3_access_key_id = None", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.s3_bucket_prefix = gnocchi", > "2018-10-02 12:57:09,735 [21] DEBUG gnocchi.service: storage.s3_check_consistency_timeout = 60.0", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.s3_endpoint_url = None", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.s3_max_pool_connections = 50", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.s3_region_name = None", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.s3_secret_access_key = None", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_auth_insecure = False", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_auth_version = 1", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_authurl = 
http://localhost:8080/auth/v1.0", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_cacert = None", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_container_prefix = gnocchi", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_endpoint_type = publicURL", > "2018-10-02 12:57:09,736 [21] DEBUG gnocchi.service: storage.swift_key = ****", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_preauthtoken = ****", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_project_domain_name = Default", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_project_name = None", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_region = None", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_service_type = object-store", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_timeout = 300", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_url = None", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_user = admin:admin", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: storage.swift_user_domain_name = Default", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: indexer.url = ****", > "2018-10-02 12:57:09,737 [21] DEBUG gnocchi.service: api.auth_mode = keystone", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.host = 0.0.0.0", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.max_limit = 1000", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.operation_timeout = 10", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.paste_config = api-paste.ini", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.port = 8041", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: api.uwsgi_mode = http", > "2018-10-02 12:57:09,738 [21] DEBUG gnocchi.service: archive_policy.default_aggregation_methods = ['mean', 'min', 'max', 'sum', 'std', 
'count']", > "2018-10-02 12:57:09,739 [21] DEBUG gnocchi.service: ********************************************************************************", > "2018-10-02 12:57:10,108 [21] INFO gnocchi.cli.manage: Upgrading indexer SQLAlchemyIndexer: mysql+pymysql://gnocchi:qN0MFbuX5wk5IiO1Qtx1x8XRz@172.17.1.28/gnocchi?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf", > "2018-10-02 12:57:10,240 [21] INFO gnocchi.common.ceph: Ceph storage backend use 'cradox' python library", > "2018-10-02 12:57:10,281 [21] INFO gnocchi.cli.manage: Upgrading storage CephStorage: 4398e5b0-c63c-11e8-b95a-525400c8bd81", > "2018-10-02 12:57:10,283 [21] INFO gnocchi.cli.manage: Upgrading incoming storage RedisStorage: StrictRedis<ConnectionPool<Connection<host=172.17.1.26,port=6379,db=0>>>", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", 
> "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from 
/etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from 
/etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > 
"Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for ec2_userdata", > "Debug: Facter: value for ec2_userdata is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_int is still nil", > "Debug: Facter: value for ipaddress6_br_int is still nil", > "Debug: Facter: value for netmask_br_int is still nil", > "Debug: Facter: value for ipaddress_br_isolated 
is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress_br_tun is still nil", > "Debug: Facter: value for ipaddress6_br_tun is still nil", > "Debug: Facter: value for netmask_br_tun is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value 
for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_int is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_br_tun is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_policy", > "Debug: Facter: value for 
selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for sensu_version is still nil", > "Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for netmask6_br_int is still nil", > "Debug: Facter: value for netmask6_br_tun is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for docker_group_gid is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > 
"Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: value for ssh_client_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_major", > "Debug: Facter: value for ssh_client_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_release", > "Debug: Facter: value for ssh_client_version_release is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for git_html_path is still nil", > "Debug: Facter: value for git_exec_path is still nil", > "Debug: Facter: value for git_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does 
not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: 
Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment 
production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): 
Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON 
backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: 
hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: pcmk_nodes_added: []", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing 
'/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/cinder/volume_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::cinder::volume_bundle from tripleo/profile/pacemaker/cinder/volume_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::cinder_volume_docker_image in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::cinder::volume_bundle::docker_volumes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::docker_environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::step in JSON backend", > "Debug: hiera(): Looking up cinder_volume_short_bootstrap_node_name in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder::volume from tripleo/profile/base/cinder/volume into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_pure_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellsc_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_unity_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_vmax_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_vnx_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_xtremio_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_hpelefthand_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellps_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_netapp_backend in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::cinder::volume::cinder_enable_nfs_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_rbd_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_scaleio_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_vrts_hs_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_nvmeof_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_user_enabled_backends in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_rbd_client_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::step in JSON backend", > "Debug: hiera(): Looking up cinder_user_enabled_backends in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_user_name in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder from tripleo/profile/base/cinder into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::cinder_enable_db_purge in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_proto in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_port in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::cinder::oslomsg_rpc_username in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_use_ssl in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_proto in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_username in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_use_ssl in JSON backend", > "Debug: hiera(): Looking up bootstrap_nodeid in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_password in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_port in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_user_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_use_ssl in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_password in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_port in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_user_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_use_ssl in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: Automatically imported cinder from cinder into production", > "Debug: importing 
'/etc/puppet/modules/cinder/manifests/params.pp' in environment production", > "Debug: Automatically imported cinder::params from cinder/params into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/defaults.pp' in environment production", > "Debug: Automatically imported openstacklib::defaults from openstacklib/defaults into production", > "Debug: hiera(): Looking up cinder::database_connection in JSON backend", > "Debug: hiera(): Looking up cinder::database_idle_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::database_min_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::database_retry_interval in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_overflow in JSON backend", > "Debug: hiera(): Looking up cinder::rpc_response_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::control_exchange in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_ha_queues in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_heartbeat_timeout_threshold in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_heartbeat_rate in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_use_ssl in JSON backend", > "Debug: hiera(): Looking up cinder::service_down_time in JSON backend", > "Debug: hiera(): Looking up cinder::report_interval in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_ca_certs in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_certfile in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_keyfile in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_version in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_reconnect_delay in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_failover_strategy in JSON backend", > "Debug: hiera(): 
Looking up cinder::kombu_compression in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_durable_queues in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_server_request_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_broadcast_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_group_request_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_container_name in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_idle_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_trace in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_ca_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_cert_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_key_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_key_password in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_allow_insecure_clients in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_mechanisms in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_config_dir in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_config_name in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_username in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_password in JSON backend", > "Debug: hiera(): Looking up cinder::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::api_paste_config in JSON backend", > "Debug: hiera(): Looking up cinder::use_syslog in JSON backend", > "Debug: hiera(): Looking up cinder::use_stderr in JSON backend", > "Debug: hiera(): Looking up cinder::log_facility in JSON backend", > "Debug: hiera(): Looking up cinder::log_dir in JSON backend", > "Debug: hiera(): Looking up cinder::debug in JSON backend", > "Debug: hiera(): Looking up cinder::storage_availability_zone in JSON backend", > "Debug: hiera(): Looking up cinder::default_availability_zone in JSON backend", > "Debug: 
hiera(): Looking up cinder::allow_availability_zone_fallback in JSON backend", > "Debug: hiera(): Looking up cinder::enable_v3_api in JSON backend", > "Debug: hiera(): Looking up cinder::lock_path in JSON backend", > "Debug: hiera(): Looking up cinder::image_conversion_dir in JSON backend", > "Debug: hiera(): Looking up cinder::host in JSON backend", > "Debug: hiera(): Looking up cinder::purge_config in JSON backend", > "Debug: hiera(): Looking up cinder::backend_host in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db.pp' in environment production", > "Debug: Automatically imported cinder::db from cinder/db into production", > "Debug: hiera(): Looking up cinder::db::database_db_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_connection in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_idle_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_min_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_retry_interval in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_overflow in JSON backend", > "Debug: hiera(): Looking up 
cinder::db::database_pool_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/logging.pp' in environment production", > "Debug: Automatically imported cinder::logging from cinder/logging into production", > "Debug: hiera(): Looking up cinder::logging::use_syslog in JSON backend", > "Debug: hiera(): Looking up cinder::logging::use_json in JSON backend", > "Debug: hiera(): Looking up cinder::logging::use_journal in JSON backend", > "Debug: hiera(): Looking up cinder::logging::use_stderr in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_facility in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_dir in JSON backend", > "Debug: hiera(): Looking up cinder::logging::debug in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_context_format_string in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_default_format_string in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_debug_format_suffix in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_exception_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_config_append in JSON backend", > "Debug: hiera(): Looking up cinder::logging::default_log_levels in JSON backend", > "Debug: hiera(): Looking up cinder::logging::publish_errors in JSON backend", > "Debug: hiera(): Looking up cinder::logging::fatal_deprecations in JSON backend", > "Debug: hiera(): Looking up cinder::logging::instance_format in JSON backend", > "Debug: hiera(): Looking up cinder::logging::instance_uuid_format in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_date_format in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/log.pp' in environment production", > "Debug: Automatically imported oslo::log from oslo/log into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/rabbit.pp' in environment production", > "Debug: 
Automatically imported oslo::messaging::rabbit from oslo/messaging/rabbit into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/amqp.pp' in environment production", > "Debug: Automatically imported oslo::messaging::amqp from oslo/messaging/amqp into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/default.pp' in environment production", > "Debug: Automatically imported oslo::messaging::default from oslo/messaging/default into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/concurrency.pp' in environment production", > "Debug: Automatically imported oslo::concurrency from oslo/concurrency into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/ceilometer.pp' in environment production", > "Debug: Automatically imported cinder::ceilometer from cinder/ceilometer into production", > "Debug: hiera(): Looking up cinder::ceilometer::notification_driver in JSON backend", > "Debug: hiera(): Looking up cinder::ceilometer::notification_topics in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/notifications.pp' in environment production", > "Debug: Automatically imported oslo::messaging::notifications from oslo/messaging/notifications into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/config.pp' in environment production", > "Debug: Automatically imported cinder::config from cinder/config into production", > "Debug: hiera(): Looking up cinder::config::cinder_config in JSON backend", > "Debug: hiera(): Looking up cinder::config::api_paste_ini_config in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/glance.pp' in environment production", > "Debug: Automatically imported cinder::glance from cinder/glance into production", > "Debug: hiera(): Looking up cinder::glance::glance_api_servers in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_num_retries in JSON backend", > "Debug: 
hiera(): Looking up cinder::glance::glance_api_insecure in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_api_ssl_compression in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_request_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_api_version in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/cron/db_purge.pp' in environment production", > "Debug: Automatically imported cinder::cron::db_purge from cinder/cron/db_purge into production", > "Debug: hiera(): Looking up cinder::cron::db_purge::minute in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::hour in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::monthday in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::month in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::weekday in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::user in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::age in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::destination in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/volume.pp' in environment production", > "Debug: Automatically imported cinder::volume from cinder/volume into production", > "Debug: hiera(): Looking up cinder::volume::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::volume::enabled in JSON backend", > "Debug: hiera(): Looking up cinder::volume::manage_service in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear_size in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear_ionice in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume/rbd.pp' in environment production", > "Debug: Automatically imported 
tripleo::profile::base::cinder::volume::rbd from tripleo/profile/base/cinder/volume/rbd into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::backend_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_backend_host in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_ceph_conf in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_pool_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_extra_pools in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_secret_uuid in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::step in JSON backend", > "Debug: hiera(): Looking up cinder::backend::rbd::volume_backend_name in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backend/rbd.pp' in environment production", > "Debug: Automatically imported cinder::backend::rbd from cinder/backend/rbd into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backends.pp' in environment production", > "Debug: Automatically imported cinder::backends from cinder/backends into production", > "Debug: hiera(): Looking up cinder::backends::backend_host in JSON backend", > "Debug: hiera(): Looking up cinder_volume_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up 
systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/params.pp' in environment production", > "Debug: Automatically imported oslo::params from oslo/params into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/bindings.pp' in environment production", > "Debug: Automatically imported mysql::bindings from mysql/bindings into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::bindings::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_package_name in JSON backend", > 
"Debug: hiera(): Looking up mysql::bindings::perl_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_provider in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/bindings/python.pp' in environment production", > "Debug: Automatically imported mysql::bindings::python from mysql/bindings/python into production", > "Debug: Resource package[ceph-common] was not determined to be defined", > "Debug: Create new resource package[ceph-common] with params {\"ensure\"=>\"present\", \"name\"=>\"ceph-common\", \"tag\"=>\"cinder-support-package\"}", > "Debug: Resource file[/etc/sysconfig/openstack-cinder-volume] was not determined to be defined", > "Debug: Create new resource 
file[/etc/sysconfig/openstack-cinder-volume] with params {\"ensure\"=>\"present\"}", > "Debug: hiera(): Looking up pacemaker::resource::bundle::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::bundle::update_settle_secs in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding 
relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-cinder-volume-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[openstack-cinder-volume] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::begin] to Package[cinder] with 'before'", > "Debug: Adding relationship from Package[cinder] to Anchor[cinder::install::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/report_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/service_down_time] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/api_paste_config] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/storage_availability_zone] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/default_availability_zone] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/allow_availability_zone_fallback] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/image_conversion_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/host] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::config::begin] to Cinder_config[DEFAULT/enable_v3_api] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_servers] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_num_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_insecure] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_ssl_compression] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_request_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear_ionice] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/enabled_backends] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/backend_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/sqlite_synchronous] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/backend] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/slave_connection] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/mysql_sql_mode] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::config::begin] to Cinder_config[database/idle_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/min_pool_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_pool_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_overflow] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection_debug] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection_trace] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/pool_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/use_db_reconnect] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_inc_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_max_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_max_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/use_tpool] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/debug] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to 
Cinder_config[DEFAULT/log_config_append] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_date_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/watch_log_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_syslog] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_journal] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_json] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/syslog_log_facility] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_stderr] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_context_format_string] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_default_format_string] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_debug_format_suffix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_exception_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_user_identity_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/default_log_levels] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] 
to Cinder_config[DEFAULT/publish_errors] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/instance_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/instance_uuid_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/fatal_deprecations] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/amqp_durable_queues] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/heartbeat_rate] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_compression] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_interval_max] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_login_method] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_password] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff] with 
'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_userid] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_hosts] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_port] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_ca_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_cert_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_key_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_version] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to 
Cinder_config[oslo_messaging_amqp/addressing_mode] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/server_request_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/broadcast_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/group_request_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/rpc_address_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/notify_address_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/multicast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/unicast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/anycast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_notification_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_rpc_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/pre_settled] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/container_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/idle_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/trace] with 'before'", > "Debug: Adding 
relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_ca_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_cert_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_key_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_key_password] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/allow_insecure_clients] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_mechanisms] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_config_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_config_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_default_realm] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/username] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/password] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_send_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_notify_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/rpc_response_timeout] with 'before'", > "Debug: 
Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/transport_url] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/control_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_concurrency/disable_process_locking] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_concurrency/lock_path] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/driver] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/transport_url] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/topics] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/volume_backend_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/volume_driver] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_ceph_conf] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_user] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_pool] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_max_clone_depth] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_secret_uuid] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connect_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connection_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connection_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_store_chunk_size] with 'before'", > "Debug: Adding relationship from Cinder_config[DEFAULT/report_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/service_down_time] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/api_paste_config] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/storage_availability_zone] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/default_availability_zone] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/allow_availability_zone_fallback] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/image_conversion_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/enable_v3_api] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_servers] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_num_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_insecure] to Anchor[cinder::config::end] with 
'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_ssl_compression] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_request_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/volume_clear] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/volume_clear_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/volume_clear_ionice] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/enabled_backends] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/backend_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/sqlite_synchronous] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/backend] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/slave_connection] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/mysql_sql_mode] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/idle_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/min_pool_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_pool_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from 
Cinder_config[database/retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_overflow] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection_debug] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection_trace] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/pool_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/use_db_reconnect] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_inc_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_max_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_max_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/use_tpool] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/debug] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_config_append] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_date_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/watch_log_file] to Anchor[cinder::config::end] 
with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_syslog] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_journal] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_json] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/syslog_log_facility] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_stderr] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_context_format_string] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_default_format_string] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_debug_format_suffix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_exception_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_user_identity_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/default_log_levels] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/publish_errors] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/instance_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/instance_uuid_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/fatal_deprecations] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/amqp_durable_queues] to 
Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/heartbeat_rate] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_compression] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_interval_max] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_login_method] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_password] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_userid] to Anchor[cinder::config::end] with 'notify'", 
> "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_hosts] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_port] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_ca_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_cert_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_key_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_version] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/addressing_mode] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/server_request_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/broadcast_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/group_request_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/rpc_address_prefix] to 
Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/notify_address_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/multicast_address] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/unicast_address] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/anycast_address] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_notification_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_rpc_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/pre_settled] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/container_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/idle_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/trace] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_ca_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_cert_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_key_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_key_password] to 
Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/allow_insecure_clients] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_mechanisms] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_config_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_config_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_default_realm] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/username] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/password] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_send_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_notify_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/rpc_response_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/transport_url] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/control_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_concurrency/disable_process_locking] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_concurrency/lock_path] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/driver] to Anchor[cinder::config::end] 
with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/transport_url] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/topics] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/volume_backend_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/volume_driver] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_ceph_conf] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_user] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_pool] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_max_clone_depth] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_secret_uuid] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connect_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connection_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connection_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_store_chunk_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::service::begin] to Service[cinder-volume] with 'notify'", > "Debug: Adding relationship from Service[cinder-volume] to Anchor[cinder::service::end] with 'notify'", > "Debug: Adding relationship from Oslo::Db[cinder_config] to Anchor[cinder::dbsync::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::install::begin] to Package[ceph-common] with 'before'", > "Debug: Adding relationship from Package[ceph-common] to Anchor[cinder::install::end] with 'before'", > "Debug: Adding relationship from Package[cinder] to Anchor[cinder::service::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Exec[exec-setfacl-openstack-cinder] to Exec[exec-setfacl-openstack-cinder-mask] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.61 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1538485036'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: 
/Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster 
tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-cinder-volume-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[openstack-cinder-volume]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]/before: subscribes to Package[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]/before: subscribes to Package[ceph-common]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: 
subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/report_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/service_down_time]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/api_paste_config]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/storage_availability_zone]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/default_availability_zone]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/allow_availability_zone_fallback]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/image_conversion_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/enable_v3_api]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_servers]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_num_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_insecure]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_ssl_compression]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[DEFAULT/glance_request_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear_ionice]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/enabled_backends]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/backend_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/sqlite_synchronous]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/backend]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/slave_connection]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/mysql_sql_mode]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/idle_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/min_pool_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_pool_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/retry_interval]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_overflow]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection_debug]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection_trace]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/pool_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/use_db_reconnect]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_inc_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_max_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_max_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/use_tpool]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/debug]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_config_append]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_date_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: 
subscribes to Cinder_config[DEFAULT/watch_log_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_syslog]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_journal]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_json]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/syslog_log_facility]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_stderr]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_context_format_string]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_default_format_string]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_debug_format_suffix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_exception_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_user_identity_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/default_log_levels]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/publish_errors]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/instance_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/instance_uuid_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[DEFAULT/fatal_deprecations]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/heartbeat_rate]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_compression]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_login_method]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_userid]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_hosts]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_port]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_ca_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_cert_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_key_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_version]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/addressing_mode]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/server_request_prefix]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/broadcast_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/group_request_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/rpc_address_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/notify_address_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/multicast_address]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/unicast_address]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/anycast_address]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_notification_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_rpc_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/pre_settled]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/container_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/idle_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/trace]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_ca_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_cert_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_key_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_key_password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/allow_insecure_clients]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_mechanisms]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_config_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_config_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_default_realm]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/username]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_send_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_notify_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/rpc_response_timeout]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/transport_url]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/control_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_concurrency/disable_process_locking]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_concurrency/lock_path]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/driver]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/transport_url]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/topics]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/volume_backend_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/volume_driver]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_ceph_conf]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_user]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_pool]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_max_clone_depth]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[tripleo_ceph/rbd_secret_uuid]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connect_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connection_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connection_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_store_chunk_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: subscribes to Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]/notify: subscribes to Service[cinder-volume]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/require: subscribes to Class[Mysql::Bindings]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/require: subscribes to Class[Mysql::Bindings::Python]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/before: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder/Package[cinder]/notify: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Package[cinder]/notify: subscribes to Anchor[cinder::service::end]", > "Debug: 
/Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/require: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]/notify: subscribes to 
Anchor[cinder::service::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder]/before: subscribes to Exec[exec-setfacl-openstack-cinder-mask]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[openstack-cinder-volume]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]/notify: subscribes to 
Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]/notify: subscribes to 
Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/notify: 
subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]/notify: subscribes to 
Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]/before: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Adding autorequire relationship 
with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > 
"Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Adding autorequire relationship with 
Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Adding autorequire relationship with 
Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Adding autorequire relationship with 
Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Adding autorequire relationship 
with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: Adding autorequire relationship with File[/etc/sysconfig/openstack-cinder-volume]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > "Debug: 
Class[Tripleo::Profile::Pacemaker::Cinder::Volume_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Volume_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Openstacklib::Defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Openstacklib::Defaults]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Db]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Logging]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Logging]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Log[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Log[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Package[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Package[cinder]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Resources[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Resources[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Rabbit[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Rabbit[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Amqp[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: Oslo::Messaging::Amqp[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Default[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Default[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Concurrency[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Concurrency[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Ceilometer]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Ceilometer]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Notifications[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Notifications[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Glance]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Cron::Db_purge]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Cron::Db_purge]: Resource is being 
skipped, unscheduling all events", > "Debug: Class[Cinder::Volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Volume]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume::Rbd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume::Rbd]: Resource is being skipped, unscheduling all events", > "Debug: Cinder::Backend::Rbd[tripleo_ceph]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Cinder::Backend::Rbd[tripleo_ceph]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder-mask]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder-mask]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Backends]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Backends]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[cinder-volume-role-controller-0]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[cinder-volume-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: 
/Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}85274b5d58af3572868d4ef10722b50f'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: 
Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xn0uc3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xn0uc3 property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> ", > "Debug: Class[Oslo::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Oslo::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Bindings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Bindings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Bindings::Python]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Bindings::Python]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Bindings::Python/Package[python-mysqldb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Mysql::Bindings::Python/Package[python-mysqldb]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Db[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Db[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching crontab resources for cron", > "Debug: looking for crontabs in /var/spool/cron", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Resource is 
being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: 
Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", 
> "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]: The container Cinder::Backend::Rbd[tripleo_ceph] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: Scheduling refresh of 
Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: The container Cinder::Backend::Rbd[tripleo_ceph] will propagate my refresh event", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Unscheduling all events on Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Info: Cinder::Backend::Rbd[tripleo_ceph]: Unscheduling all events on Cinder::Backend::Rbd[tripleo_ceph]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pkyu63 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-pkyu63 property show | grep cinder-volume-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep cinder-volume-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kmmg0e returned ", > "Debug: 
try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kmmg0e property set --node controller-0 cinder-volume-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kmmg0e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1kmmg0e.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 cinder-volume-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/Pcmk_property[property-controller-0-cinder-volume-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/Pcmk_property[property-controller-0-cinder-volume-role]: The container Pacemaker::Property[cinder-volume-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[cinder-volume-role-controller-0]: Unscheduling all events on Pacemaker::Property[cinder-volume-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-onnk86 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-onnk86 constraint list | grep location-openstack-cinder-volume > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ajji6u returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1ajji6u resource show openstack-cinder-volume > /dev/null 2>&1", > "Debug: Exists: bundle openstack-cinder-volume exists 1 
location exists 1 deep_compare: true", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bziqs5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bziqs5 resource bundle create openstack-cinder-volume container docker image=192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest replicas=1 options=\"--ipc=host --privileged=true --user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=cinder-volume-etc-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=cinder-volume-etc-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=cinder-volume-etc-pki-ca-trust-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=cinder-volume-etc-pki-ca-trust-source-anchors source-dir=/etc/pki/ca-trust/source/anchors target-dir=/etc/pki/ca-trust/source/anchors options=ro storage-map id=cinder-volume-etc-pki-tls-certs-ca-bundle.crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=cinder-volume-etc-pki-tls-certs-ca-bundle.trust.crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=cinder-volume-etc-pki-tls-cert.pem source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=cinder-volume-dev-log source-dir=/dev/log target-dir=/dev/log options=rw storage-map id=cinder-volume-etc-ssh-ssh_known_hosts source-dir=/etc/ssh/ssh_known_hosts target-dir=/etc/ssh/ssh_known_hosts options=ro storage-map id=cinder-volume-etc-puppet source-dir=/etc/puppet target-dir=/etc/puppet options=ro storage-map id=cinder-volume-var-lib-kolla-config_files-cinder_volume.json 
source-dir=/var/lib/kolla/config_files/cinder_volume.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=cinder-volume-var-lib-config-data-puppet-generated-cinder- source-dir=/var/lib/config-data/puppet-generated/cinder/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=cinder-volume-etc-iscsi source-dir=/etc/iscsi target-dir=/var/lib/kolla/config_files/src-iscsid options=ro storage-map id=cinder-volume-etc-ceph source-dir=/etc/ceph target-dir=/var/lib/kolla/config_files/src-ceph options=ro storage-map id=cinder-volume-lib-modules source-dir=/lib/modules target-dir=/lib/modules options=ro storage-map id=cinder-volume-dev- source-dir=/dev/ target-dir=/dev/ options=rw storage-map id=cinder-volume-run- source-dir=/run/ target-dir=/run/ options=rw storage-map id=cinder-volume-sys source-dir=/sys target-dir=/sys options=rw storage-map id=cinder-volume-var-lib-cinder source-dir=/var/lib/cinder target-dir=/var/lib/cinder options=rw storage-map id=cinder-volume-var-log-containers-cinder source-dir=/var/log/containers/cinder target-dir=/var/log/cinder options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bziqs5 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bziqs5.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: location_rule_create: constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c7mzaf returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c7mzaf constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: push_cib: /usr/sbin/pcs 
cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c7mzaf diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-c7mzaf.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1agabpw returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1agabpw resource enable openstack-cinder-volume", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1agabpw diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1agabpw.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: The container Pacemaker::Resource::Bundle[openstack-cinder-volume] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Unscheduling all events on Pacemaker::Resource::Bundle[openstack-cinder-volume]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not 
tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 27320520", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 33.86 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Skipped: 174", > " Total: 184", > " Out of sync: 8", > " Changed: 8", > "Time:", > " File line: 0.00", > " File: 0.01", > " Pcmk property: 10.91", > " Last run: 1538485073", > " Config retrieval: 2.98", > " Pcmk bundle: 22.33", > " Total: 36.22", > "Version:", > " Config: 1538485036", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, 
:links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 
'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding 
autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: Finishing transaction 24527760", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=5", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle'", > "+ EXTRA_ARGS='--debug --verbose'", > "+ '[' -d /tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 5}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle'", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/volume.pp\", 44]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume.pp\", 117]", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "stdout: 83e83a6873c95d1bd775c47f36ffda18ffe68e32d7108d2b67bfc77708991a7f", > "stdout: ba47779885babe3eb70900597a3bed54d0dea94ee8bfe07e83140ec484e0f63f", > "stdout: e9744013e7c4dd61f322541882e3732783890454892c740ce04d14504cd20e69", > "stdout: (cellv2) Running cell_v2 host discovery", > "(cellv2) Waiting 600 seconds for hosts to register", > "(cellv2) compute node compute-0.localdomain has registered", > "(cellv2) All nodes registered", > "(cellv2) Running host discovery...", > "Found 2 cell mappings.", > "Skipping cell0 since it does not contain hosts.", > "Getting computes from cell 'default': 8d8ba8aa-3e63-466c-9f7b-23ad8a1696cb", > "Creating host mapping for service compute-0.localdomain", > "Found 1 unmapped computes in cell: 8d8ba8aa-3e63-466c-9f7b-23ad8a1696cb", > "Debug: Facter: value for ipaddress_vxlan_sys_4789 is still nil", > "Debug: Facter: value for ipaddress6_vxlan_sys_4789 is still nil", > "Debug: Facter: value for netmask_vxlan_sys_4789 is still nil", > "Debug: Facter: value for network_vxlan_sys_4789 is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/cinder/backup_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::cinder::backup_bundle from tripleo/profile/pacemaker/cinder/backup_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::cinder_backup_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::docker_volumes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::docker_environment in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::cinder::backup_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::step in JSON backend", > "Debug: hiera(): Looking up cinder_backup_short_bootstrap_node_name in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder/backup.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder::backup from tripleo/profile/base/cinder/backup into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::backup::step in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backup.pp' in environment production", > "Debug: Automatically imported cinder::backup from cinder/backup into production", > "Debug: hiera(): Looking up cinder::backup::enabled in JSON backend", > "Debug: hiera(): Looking up cinder::backup::manage_service in JSON backend", > "Debug: hiera(): Looking up cinder::backup::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_manager in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_api_class in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_name_template in JSON backend", > "Debug: hiera(): Looking up cinder_backup_short_node_names in JSON backend", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-cinder-backup-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[openstack-cinder-backup] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_manager] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_api_class] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_name_template] with 'before'", > "Debug: Adding relationship 
from Cinder_config[DEFAULT/backup_manager] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/backup_api_class] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/backup_name_template] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::service::begin] to Service[cinder-backup] with 'notify'", > "Debug: Adding relationship from Service[cinder-backup] to Anchor[cinder::service::end] with 'notify'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.31 seconds", > "Info: Applying configuration version '1538485091'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-cinder-backup-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[openstack-cinder-backup]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_manager]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_api_class]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_name_template]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]/notify: subscribes to Service[cinder-backup]", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]/notify: subscribes to Anchor[cinder::service::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]/notify: subscribes to Anchor[cinder::config::end]", > 
"Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[openstack-cinder-backup]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Backup_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Backup_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder::Backup]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Backup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[cinder-backup-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[cinder-backup-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17247bl returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-17247bl property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-158iinc returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-158iinc property show | grep cinder-backup-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep cinder-backup-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xekhpr 
returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xekhpr property set --node controller-0 cinder-backup-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xekhpr diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-xekhpr.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 cinder-backup-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/Pcmk_property[property-controller-0-cinder-backup-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/Pcmk_property[property-controller-0-cinder-backup-role]: The container Pacemaker::Property[cinder-backup-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[cinder-backup-role-controller-0]: Unscheduling all events on Pacemaker::Property[cinder-backup-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-11y9bpy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-11y9bpy constraint list | grep location-openstack-cinder-backup > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-hb0qeo returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-hb0qeo resource show openstack-cinder-backup > /dev/null 2>&1", > "Debug: Exists: bundle 
openstack-cinder-backup exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-7kd1zo returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-7kd1zo resource bundle create openstack-cinder-backup container docker image=192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest replicas=1 options=\"--ipc=host --privileged=true --user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=cinder-backup-etc-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=cinder-backup-etc-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=cinder-backup-etc-pki-ca-trust-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=cinder-backup-etc-pki-ca-trust-source-anchors source-dir=/etc/pki/ca-trust/source/anchors target-dir=/etc/pki/ca-trust/source/anchors options=ro storage-map id=cinder-backup-etc-pki-tls-certs-ca-bundle.crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=cinder-backup-etc-pki-tls-certs-ca-bundle.trust.crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=cinder-backup-etc-pki-tls-cert.pem source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=cinder-backup-dev-log source-dir=/dev/log target-dir=/dev/log options=rw storage-map id=cinder-backup-etc-ssh-ssh_known_hosts source-dir=/etc/ssh/ssh_known_hosts target-dir=/etc/ssh/ssh_known_hosts options=ro storage-map id=cinder-backup-etc-puppet source-dir=/etc/puppet target-dir=/etc/puppet options=ro storage-map id=cinder-backup-var-lib-kolla-config_files-cinder_backup.json 
source-dir=/var/lib/kolla/config_files/cinder_backup.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=cinder-backup-var-lib-config-data-puppet-generated-cinder- source-dir=/var/lib/config-data/puppet-generated/cinder/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=cinder-backup-etc-iscsi source-dir=/etc/iscsi target-dir=/var/lib/kolla/config_files/src-iscsid options=ro storage-map id=cinder-backup-etc-ceph source-dir=/etc/ceph target-dir=/var/lib/kolla/config_files/src-ceph options=ro storage-map id=cinder-backup-dev- source-dir=/dev/ target-dir=/dev/ options=rw storage-map id=cinder-backup-run- source-dir=/run/ target-dir=/run/ options=rw storage-map id=cinder-backup-sys source-dir=/sys target-dir=/sys options=rw storage-map id=cinder-backup-lib-modules source-dir=/lib/modules target-dir=/lib/modules options=ro storage-map id=cinder-backup-var-lib-cinder source-dir=/var/lib/cinder target-dir=/var/lib/cinder options=rw storage-map id=cinder-backup-var-log-containers-cinder source-dir=/var/log/containers/cinder target-dir=/var/log/cinder options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-7kd1zo diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-7kd1zo.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: location_rule_create: constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bzxcqi returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bzxcqi constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: push_cib: /usr/sbin/pcs 
cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bzxcqi diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-1bzxcqi.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lbqa7l returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lbqa7l resource enable openstack-cinder-backup", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lbqa7l diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20181002-8-lbqa7l.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Resource::Bundle[openstack-cinder-backup]/Pcmk_bundle[openstack-cinder-backup]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Resource::Bundle[openstack-cinder-backup]/Pcmk_bundle[openstack-cinder-backup]: The container Pacemaker::Resource::Bundle[openstack-cinder-backup] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Unscheduling all events on Pacemaker::Resource::Bundle[openstack-cinder-backup]", > "Debug: Finishing transaction 29586780", > "Notice: Applied catalog in 33.57 seconds", > " Total: 6", > " Success: 6", > " Skipped: 157", > " Total: 165", > " Out of sync: 6", > " Changed: 6", > " Pcmk property: 10.77", > " Last run: 1538485127", > " Config retrieval: 2.65", > " Pcmk bundle: 22.26", > " Total: 35.68", > " Config: 1538485091", > "Debug: Finishing transaction 45085280", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle'", > "+ puppet apply --debug --verbose --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle'", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/backup.pp\", 63]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder/backup.pp\", 33]", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "stdout: Running batches of 50 until complete", > "+---------------------------------------------+--------------+-----------+", > "| Migration | Total Needed | Completed |", > "| create_incomplete_consumers | 0 | 0 |", > "| delete_build_requests_with_no_instance_uuid | 0 | 0 |", > "| migrate_instances_add_request_spec | 0 | 0 |", > "| migrate_keypairs_to_api_db | 0 | 0 |", > "| migrate_quota_classes_to_api_db | 0 | 0 |", > "| migrate_quota_limits_to_api_db | 0 | 0 |", > "| migration_migrate_to_uuid | 0 | 0 |", > "| populate_missing_availability_zones | 0 | 0 |", > "| populate_queued_for_delete | 0 | 0 |", > "| populate_uuids | 0 | 0 |", > "| service_uuids_online_data_migration | 0 | 0 |", > "stdout: Running batches of 50 until complete.", > "+--------------------------------------------+--------------+-----------+", > "| Migration | Total Needed | Completed |", > "| attachment_specs_online_data_migration | 0 | 0 |", > "| backup_service_online_migration | 0 | 0 |", > "| service_uuids_online_data_migration | 0 | 0 |", > "| shared_targets_online_data_migration | 0 | 0 |", > "| volume_service_uuids_online_data_migration | 0 | 0 |", > "stderr: Deprecated: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\"." 
> ] >} >2018-10-02 08:59:08,292 p=1004 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks5.json exists] ******** >2018-10-02 08:59:08,293 p=1004 u=mistral | Tuesday 02 October 2018 08:59:08 -0400 (0:00:02.548) 0:30:21.026 ******* >2018-10-02 08:59:08,537 p=1004 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:59:08,581 p=1004 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:59:08,618 p=1004 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 08:59:08,662 p=1004 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 5] ******************** >2018-10-02 08:59:08,662 p=1004 u=mistral | Tuesday 02 October 2018 08:59:08 -0400 (0:00:00.369) 0:30:21.396 ******* >2018-10-02 08:59:08,704 p=1004 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:59:08,738 p=1004 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:59:08,753 p=1004 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 08:59:08,786 p=1004 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 5] *** >2018-10-02 08:59:08,787 p=1004 u=mistral | Tuesday 02 October 2018 08:59:08 -0400 (0:00:00.124) 0:30:21.520 ******* >2018-10-02 08:59:08,829 p=1004 u=mistral | skipping: [controller-0] => {} >2018-10-02 08:59:08,864 p=1004 u=mistral | skipping: [ceph-0] => {} >2018-10-02 08:59:08,877 p=1004 u=mistral | skipping: [compute-0] => {} >2018-10-02 08:59:08,885 p=1004 u=mistral | PLAY [Server Post Deployments] ************************************************* 
>2018-10-02 08:59:08,921 p=1004 u=mistral | TASK [include_tasks] *********************************************************** >2018-10-02 08:59:08,921 p=1004 u=mistral | Tuesday 02 October 2018 08:59:08 -0400 (0:00:00.134) 0:30:21.654 ******* >2018-10-02 08:59:09,055 p=1004 u=mistral | PLAY [External deployment Post Deploy tasks] *********************************** >2018-10-02 08:59:09,059 p=1004 u=mistral | PLAY RECAP ********************************************************************* >2018-10-02 08:59:09,060 p=1004 u=mistral | ceph-0 : ok=133 changed=55 unreachable=0 failed=0 >2018-10-02 08:59:09,060 p=1004 u=mistral | compute-0 : ok=154 changed=69 unreachable=0 failed=0 >2018-10-02 08:59:09,060 p=1004 u=mistral | controller-0 : ok=202 changed=92 unreachable=0 failed=0 >2018-10-02 08:59:09,060 p=1004 u=mistral | undercloud : ok=32 changed=19 unreachable=0 failed=0 >2018-10-02 08:59:09,061 p=1004 u=mistral | Tuesday 02 October 2018 08:59:09 -0400 (0:00:00.139) 0:30:21.794 ******* >2018-10-02 08:59:09,061 p=1004 u=mistral | =============================================================================== >2018-10-02 10:39:28,773 p=605 u=mistral | Using /var/lib/mistral/overcloud/ansible.cfg as config file >2018-10-02 10:39:29,797 p=605 u=mistral | PLAY [Gather facts from undercloud] ******************************************** >2018-10-02 10:39:29,810 p=605 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-10-02 10:39:29,810 p=605 u=mistral | Tuesday 02 October 2018 10:39:29 -0400 (0:00:00.084) 0:00:00.084 ******* >2018-10-02 10:39:43,441 p=605 u=mistral | ok: [undercloud] >2018-10-02 10:39:43,461 p=605 u=mistral | PLAY [Gather facts from overcloud] ********************************************* >2018-10-02 10:39:43,480 p=605 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-10-02 10:39:43,480 p=605 u=mistral | Tuesday 02 October 2018 10:39:43 -0400 (0:00:13.669) 
0:00:13.753 ******* >2018-10-02 10:39:47,553 p=605 u=mistral | ok: [compute-0] >2018-10-02 10:39:47,659 p=605 u=mistral | ok: [controller-0] >2018-10-02 10:39:47,742 p=605 u=mistral | ok: [ceph-0] >2018-10-02 10:39:47,776 p=605 u=mistral | PLAY [Load global variables] *************************************************** >2018-10-02 10:39:47,800 p=605 u=mistral | TASK [include_vars] ************************************************************ >2018-10-02 10:39:47,801 p=605 u=mistral | Tuesday 02 October 2018 10:39:47 -0400 (0:00:04.320) 0:00:18.074 ******* >2018-10-02 10:39:47,874 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.32]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.32]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.19]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.28]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.13]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.10]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.28]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.20]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.10]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.10]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.10]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": 
"[172.17.1.14]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.25]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.22]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.14]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.12]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.123]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.12]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.12]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-10-02 10:39:47,896 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.32]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.32]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.19]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.28]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.13]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.10]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.28]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.20]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.10]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.10]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.10]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": 
"[172.17.1.14]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.25]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.22]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.14]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.12]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.123]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.12]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.12]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-10-02 10:39:47,902 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.32]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.32]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.19]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.28]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.13]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.10]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.28]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.20]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.10]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.10]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.10]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": 
"[172.17.1.14]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.25]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.22]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.14]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.12]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.123]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.12]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.12]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-10-02 10:39:47,918 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.32]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.32]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.19]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.28]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.13]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.10]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.28]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.20]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.10]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.10]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.10]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": 
"[172.17.1.14]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.25]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.22]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.14]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.12]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.123]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.12]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.12]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-10-02 10:39:47,924 p=605 u=mistral | PLAY [Common roles for TripleO servers] **************************************** >2018-10-02 10:39:47,948 p=605 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] ******* >2018-10-02 10:39:47,949 p=605 u=mistral | Tuesday 02 October 2018 10:39:47 -0400 (0:00:00.147) 0:00:18.222 ******* >2018-10-02 10:39:48,775 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-10-02 10:39:48,788 p=605 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-10-02 10:39:48,830 p=605 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-10-02 10:39:48,856 p=605 
u=mistral | TASK [tripleo-bootstrap : Check required packages are installed] *************** >2018-10-02 10:39:48,856 p=605 u=mistral | Tuesday 02 October 2018 10:39:48 -0400 (0:00:00.907) 0:00:19.129 ******* >2018-10-02 10:39:49,275 p=605 u=mistral | changed: [ceph-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.036547", "end": "2018-10-02 10:39:49.243816", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 10:39:49.207269", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,286 p=605 u=mistral | changed: [compute-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.035967", "end": "2018-10-02 10:39:49.249287", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 10:39:49.213320", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. 
If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,294 p=605 u=mistral | changed: [controller-0] => (item=openstack-heat-agents) => {"changed": true, "cmd": ["rpm", "-q", "openstack-heat-agents"], "delta": "0:00:00.037837", "end": "2018-10-02 10:39:49.252198", "item": "openstack-heat-agents", "rc": 0, "start": "2018-10-02 10:39:49.214361", "stderr": "", "stderr_lines": [], "stdout": "openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch", "stdout_lines": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,475 p=605 u=mistral | changed: [ceph-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.035255", "end": "2018-10-02 10:39:49.448458", "item": "jq", "rc": 0, "start": "2018-10-02 10:39:49.413203", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,477 p=605 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. 
> >2018-10-02 10:39:49,482 p=605 u=mistral | changed: [compute-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.035718", "end": "2018-10-02 10:39:49.456762", "item": "jq", "rc": 0, "start": "2018-10-02 10:39:49.421044", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,502 p=605 u=mistral | changed: [controller-0] => (item=jq) => {"changed": true, "cmd": ["rpm", "-q", "jq"], "delta": "0:00:00.037306", "end": "2018-10-02 10:39:49.469283", "item": "jq", "rc": 0, "start": "2018-10-02 10:39:49.431977", "stderr": "", "stderr_lines": [], "stdout": "jq-1.3-4.el7ost.x86_64", "stdout_lines": ["jq-1.3-4.el7ost.x86_64"], "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. 
If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."]} >2018-10-02 10:39:49,531 p=605 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] *** >2018-10-02 10:39:49,531 p=605 u=mistral | Tuesday 02 October 2018 10:39:49 -0400 (0:00:00.674) 0:00:19.804 ******* >2018-10-02 10:39:49,931 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:39:49,938 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:39:49,939 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:39:49,965 p=605 u=mistral | TASK [tripleo-ssh-known-hosts : Add hosts key in /etc/ssh/ssh_known_hosts for live/cold-migration] *** >2018-10-02 10:39:49,965 p=605 u=mistral | Tuesday 02 October 2018 10:39:49 -0400 (0:00:00.434) 0:00:20.238 ******* >2018-10-02 10:39:50,408 p=605 u=mistral | changed: [ceph-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"} >2018-10-02 10:39:50,414 p=605 u=mistral | changed: [compute-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"} >2018-10-02 10:39:50,422 p=605 u=mistral | changed: 
[controller-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"} >2018-10-02 10:39:50,616 p=605 u=mistral | changed: [ceph-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"} >2018-10-02 10:39:50,618 p=605 u=mistral | changed: [compute-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"} >2018-10-02 10:39:50,677 p=605 u=mistral | changed: [controller-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"} >2018-10-02 10:39:50,834 p=605 u=mistral | changed: [compute-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"} >2018-10-02 10:39:50,878 p=605 u=mistral | changed: [controller-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"} >2018-10-02 10:39:50,920 p=605 u=mistral | changed: [ceph-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"} >2018-10-02 10:39:50,929 p=605 u=mistral | PLAY [Overcloud deploy step tasks for step 0] ********************************** >2018-10-02 10:39:50,937 p=605 u=mistral | PLAY [Server deployments] ****************************************************** >2018-10-02 10:39:50,964 p=605 u=mistral | TASK [include_tasks] *********************************************************** >2018-10-02 10:39:50,964 p=605 u=mistral | Tuesday 02 October 2018 10:39:50 -0400 (0:00:00.998) 0:00:21.237 ******* >2018-10-02 10:39:51,610 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, compute-0, ceph-0 >2018-10-02 10:39:51,632 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,653 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, compute-0, ceph-0 >2018-10-02 10:39:51,674 p=605 u=mistral | included: 
/var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,698 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,722 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,746 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,768 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,791 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for controller-0 >2018-10-02 10:39:51,814 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,836 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,857 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,879 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,901 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,924 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,946 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for compute-0 >2018-10-02 10:39:51,968 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:51,991 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,013 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,036 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,060 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,083 p=605 u=mistral | 
included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,107 p=605 u=mistral | included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0 >2018-10-02 10:39:52,138 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:39:52,138 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:01.173) 0:00:22.411 ******* >2018-10-02 10:39:52,208 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "05ceff71-03c4-4ebd-a9f1-b5af35ba895a"}, "changed": false} >2018-10-02 10:39:52,232 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "d7a9d92d-ad25-41a0-901e-63123382ec83"}, "changed": false} >2018-10-02 10:39:52,265 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "19664322-0639-4489-bc4a-ea7b4675f911"}, "changed": false} >2018-10-02 10:39:52,293 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:39:52,293 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.154) 0:00:22.566 ******* >2018-10-02 10:39:52,363 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:39:52,385 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:39:52,414 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:39:52,438 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:39:52,439 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.145) 0:00:22.712 ******* >2018-10-02 10:39:52,471 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,500 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 10:39:52,512 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,536 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:39:52,536 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.097) 0:00:22.809 ******* >2018-10-02 10:39:52,563 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,591 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,606 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,629 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:39:52,629 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.092) 0:00:22.902 ******* >2018-10-02 10:39:52,660 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,691 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,704 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,734 p=605 u=mistral | TASK [Render deployment file for NetworkDeployment for check-mode] ************* >2018-10-02 10:39:52,734 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.104) 0:00:23.007 ******* >2018-10-02 10:39:52,764 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,797 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,810 p=605 u=mistral 
| skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,837 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:39:52,837 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.102) 0:00:23.110 ******* >2018-10-02 10:39:52,868 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,899 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,911 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,936 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:39:52,937 p=605 u=mistral | Tuesday 02 October 2018 10:39:52 -0400 (0:00:00.099) 0:00:23.210 ******* >2018-10-02 10:39:52,966 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:52,996 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,008 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,033 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:39:53,034 p=605 u=mistral | Tuesday 02 October 2018 10:39:53 -0400 (0:00:00.096) 0:00:23.307 ******* >2018-10-02 10:39:53,064 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,096 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,184 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 10:39:53,257 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:39:53,257 p=605 u=mistral | Tuesday 02 October 2018 10:39:53 -0400 (0:00:00.223) 0:00:23.531 ******* >2018-10-02 10:39:53,289 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:39:53,319 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:39:53,334 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:39:53,360 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:39:53,360 p=605 u=mistral | Tuesday 02 October 2018 10:39:53 -0400 (0:00:00.102) 0:00:23.633 ******* >2018-10-02 10:39:53,392 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,424 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,437 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:53,465 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:39:53,465 p=605 u=mistral | Tuesday 02 October 2018 10:39:53 -0400 (0:00:00.105) 0:00:23.739 ******* >2018-10-02 10:39:53,498 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:39:53,530 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:39:53,549 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:39:53,582 p=605 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-10-02 10:39:53,582 p=605 u=mistral | Tuesday 02 October 2018 10:39:53 -0400 (0:00:00.116) 0:00:23.856 ******* >2018-10-02 10:39:54,353 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "bf412730ea10b3c2d6804469152fa1160984f6dc", "dest": 
"/var/lib/heat-config/tripleo-config-download/NetworkDeployment-19664322-0639-4489-bc4a-ea7b4675f911", "gid": 0, "group": "root", "md5sum": "bbba50b64d030c3cf44bd3e3e27955e8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8774, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491193.72-81171947875942/source", "state": "file", "uid": 0} >2018-10-02 10:39:54,359 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "4c46fdb14977468be0d7ee289e6e98c6ae15dc8b", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-d7a9d92d-ad25-41a0-901e-63123382ec83", "gid": 0, "group": "root", "md5sum": "aca27a7626a5271eebe4b8207ad90201", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491193.68-213794479241989/source", "state": "file", "uid": 0} >2018-10-02 10:39:54,363 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "cbb3cc7bb9f7fe6e60179c2bdbb24287fe374219", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-05ceff71-03c4-4ebd-a9f1-b5af35ba895a", "gid": 0, "group": "root", "md5sum": "7e7e9edf55f79acc7df38ef98443dedb", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491193.65-251717666469447/source", "state": "file", "uid": 0} >2018-10-02 10:39:54,392 p=605 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-10-02 10:39:54,392 p=605 u=mistral | Tuesday 02 October 2018 10:39:54 -0400 (0:00:00.809) 0:00:24.665 ******* >2018-10-02 10:39:54,619 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:39:54,640 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:39:54,645 p=605 u=mistral | ok: [ceph-0] => {"changed": 
false, "stat": {"exists": false}} >2018-10-02 10:39:54,676 p=605 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-10-02 10:39:54,676 p=605 u=mistral | Tuesday 02 October 2018 10:39:54 -0400 (0:00:00.284) 0:00:24.949 ******* >2018-10-02 10:39:54,708 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,740 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,756 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,786 p=605 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-10-02 10:39:54,786 p=605 u=mistral | Tuesday 02 October 2018 10:39:54 -0400 (0:00:00.110) 0:00:25.059 ******* >2018-10-02 10:39:54,819 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,850 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,867 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,895 p=605 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-10-02 10:39:54,895 p=605 u=mistral | Tuesday 02 October 2018 10:39:54 -0400 (0:00:00.109) 0:00:25.168 ******* >2018-10-02 10:39:54,926 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,955 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:54,973 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:39:55,004 p=605 
u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-10-02 10:39:55,004 p=605 u=mistral | Tuesday 02 October 2018 10:39:55 -0400 (0:00:00.108) 0:00:25.277 ******* >2018-10-02 10:40:10,673 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.notify.json)", "delta": "0:00:15.392123", "end": "2018-10-02 10:40:10.636290", "rc": 0, "start": "2018-10-02 10:39:55.244167", "stderr": "[2018-10-02 10:39:55,273] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json\n[2018-10-02 10:40:10,217] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], 
\\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: 
eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 
10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 
192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:10,217] (heat-config) [DEBUG] [2018-10-02 10:39:55,298] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 10:39:55,298] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NetworkDeployment-mzlurkbczhfq-TripleOSoftwareDeployment-3afkpa7let4k/41e5c8c7-344c-473c-9b06-f35c1d0f1c9e\n[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:39:55,299] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911\n[2018-10-02 10:40:10,213] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 10:40:10,213] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 10:39:56 AM] 
[INFO] eth0 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 10:40:10,213] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911\n\n[2018-10-02 10:40:10,217] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:10,218] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.notify.json\n[2018-10-02 10:40:10,629] (heat-config) [INFO] \n[2018-10-02 10:40:10,629] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:39:55,273] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json", "[2018-10-02 10:40:10,217] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], 
\\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c 
/etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ 
os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:10,217] (heat-config) [DEBUG] [2018-10-02 10:39:55,298] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 10:39:55,298] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NetworkDeployment-mzlurkbczhfq-TripleOSoftwareDeployment-3afkpa7let4k/41e5c8c7-344c-473c-9b06-f35c1d0f1c9e", "[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:39:55,299] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911", "[2018-10-02 10:40:10,213] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 10:40:10,213] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.", "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] 
No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40", "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 10:40:10,213] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911", "", "[2018-10-02 10:40:10,217] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:10,218] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.notify.json", "[2018-10-02 10:40:10,629] (heat-config) [INFO] ", "[2018-10-02 10:40:10,629] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:15,547 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.notify.json)", "delta": 
"0:00:20.299610", "end": "2018-10-02 10:40:15.511650", "rc": 0, "start": "2018-10-02 10:39:55.212040", "stderr": "[2018-10-02 10:39:55,240] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json\n[2018-10-02 10:40:15,040] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], 
\\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 
10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url 
os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:15,041] (heat-config) [DEBUG] [2018-10-02 10:39:55,266] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 10:39:55,267] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NetworkDeployment-nhdnqjpgcnba-TripleOSoftwareDeployment-ob4gqrgkemxd/9e79e1a0-7a08-4bce-87e1-4f79939de932\n[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:39:55,267] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83\n[2018-10-02 10:40:15,035] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 10:40:15,035] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\n[2018/10/02 
10:39:56 AM] [INFO] lo is not an active nic\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/10/02 10:39:56 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: 
eth0\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 10:40:15,036] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83\n\n[2018-10-02 10:40:15,041] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:15,042] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.notify.json\n[2018-10-02 10:40:15,503] (heat-config) [INFO] \n[2018-10-02 10:40:15,504] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:39:55,240] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json", "[2018-10-02 10:40:15,040] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 
10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on 
interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 
']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:15,041] (heat-config) [DEBUG] [2018-10-02 10:39:55,266] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 10:39:55,267] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NetworkDeployment-nhdnqjpgcnba-TripleOSoftwareDeployment-ob4gqrgkemxd/9e79e1a0-7a08-4bce-87e1-4f79939de932", "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:39:55,267] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83", "[2018-10-02 10:40:15,035] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 10:40:15,035] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net 
config provider created.", "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2", "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: 
br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", 
"[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2", "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50", "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local 
IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 10:40:15,036] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83", "", "[2018-10-02 10:40:15,041] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:15,042] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.notify.json", "[2018-10-02 10:40:15,503] (heat-config) [INFO] ", "[2018-10-02 10:40:15,504] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:24,496 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.notify.json)", "delta": "0:00:29.273676", "end": "2018-10-02 10:40:24.454525", "rc": 0, "start": "2018-10-02 10:39:55.180849", "stderr": "[2018-10-02 10:39:55,211] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json\n[2018-10-02 10:40:23,976] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": 
\\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 
'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' 
']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:23,976] (heat-config) [DEBUG] [2018-10-02 10:39:55,237] (heat-config) [INFO] interface_name=nic1\n[2018-10-02 10:39:55,237] (heat-config) [INFO] bridge_name=br-ex\n[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NetworkDeployment-ebgm3wuirtbl-TripleOSoftwareDeployment-3wgdyri347et/fa27be6f-6789-4d37-add4-b7f1b7a2d107\n[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:39:55,238] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a\n[2018-10-02 10:40:23,971] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-10-02 10:40:23,972] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], 
\"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: 
br-isolated\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: br-ex\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/10/02 10:39:56 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\n[2018/10/02 10:39:56 AM] 
[INFO] running ifup on bridge: br-ex\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan50\n[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ 
is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-10-02 10:40:23,972] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a\n\n[2018-10-02 10:40:23,977] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:23,977] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.notify.json\n[2018-10-02 10:40:24,447] (heat-config) [INFO] \n[2018-10-02 10:40:24,447] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:39:55,211] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json", "[2018-10-02 10:40:23,976] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": 
\\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 
10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 
AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 
192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:23,976] (heat-config) [DEBUG] [2018-10-02 10:39:55,237] (heat-config) [INFO] interface_name=nic1", "[2018-10-02 10:39:55,237] (heat-config) [INFO] bridge_name=br-ex", "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NetworkDeployment-ebgm3wuirtbl-TripleOSoftwareDeployment-3wgdyri347et/fa27be6f-6789-4d37-add4-b7f1b7a2d107", "[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:39:55,238] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a", "[2018-10-02 10:40:23,971] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-10-02 10:40:23,972] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": 
\"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": 
[{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.", "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", "[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40", "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50", "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex", "[2018/10/02 10:39:56 AM] 
[INFO] adding custom route for interface: br-ex", "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2", "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-ex", "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2", "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1", "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0", "[2018/10/02 
10:40:05 AM] [INFO] running ifup on interface: vlan50", "[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20", "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30", "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40", "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-10-02 10:40:23,972] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a", "", "[2018-10-02 10:40:23,977] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:23,977] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.notify.json", "[2018-10-02 10:40:24,447] (heat-config) [INFO] ", "[2018-10-02 10:40:24,447] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:24,528 p=605 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-10-02 10:40:24,529 p=605 u=mistral | Tuesday 02 October 2018 10:40:24 -0400 (0:00:29.524) 0:00:54.802 ******* >2018-10-02 10:40:24,603 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:39:55,211] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json", > "[2018-10-02 10:40:23,976] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": 
\\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.22/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.123/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-ex\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:23,976] (heat-config) [DEBUG] [2018-10-02 10:39:55,237] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 10:39:55,237] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:39:55,237] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NetworkDeployment-ebgm3wuirtbl-TripleOSoftwareDeployment-3wgdyri347et/fa27be6f-6789-4d37-add4-b7f1b7a2d107", > "[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:39:55,238] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:39:55,238] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a", > "[2018-10-02 
10:40:23,971] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 10:40:23,972] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": 
[{\"ip_netmask\": \"172.17.3.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.22/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.123/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", > "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", > "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 
10:39:56 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-ex", > "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: br-ex", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2", > "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/10/02 10:39:56 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-ex", > "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth2", > "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: eth0", > "[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan50", > "[2018/10/02 10:40:10 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:18 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 10:40:22 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 10:40:23 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 10:40:23,972] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/05ceff71-03c4-4ebd-a9f1-b5af35ba895a", > "", > "[2018-10-02 10:40:23,977] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:23,977] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.json < /var/lib/heat-config/deployed/05ceff71-03c4-4ebd-a9f1-b5af35ba895a.notify.json", > "[2018-10-02 10:40:24,447] (heat-config) [INFO] ", > "[2018-10-02 10:40:24,447] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:24,619 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:39:55,240] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json", > 
"[2018-10-02 10:40:15,040] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50\\n[2018/10/02 
10:39:56 AM] [INFO] adding interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/10/02 10:39:56 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:15,041] (heat-config) [DEBUG] [2018-10-02 10:39:55,266] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] 
deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NetworkDeployment-nhdnqjpgcnba-TripleOSoftwareDeployment-ob4gqrgkemxd/9e79e1a0-7a08-4bce-87e1-4f79939de932", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:39:55,267] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:39:55,267] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83", > "[2018-10-02 10:40:15,035] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 10:40:15,035] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", > "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", > "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", > "[2018/10/02 
10:39:56 AM] [INFO] adding vlan: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth2", > "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth2", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan20", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan50", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 10:39:56 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth2", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 10:39:57 AM] [INFO] running ifup on interface: eth0", > "[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan50", > "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan20", > "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:14 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url 
os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 
192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 10:40:15,036] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d7a9d92d-ad25-41a0-901e-63123382ec83", > "", > "[2018-10-02 10:40:15,041] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:15,042] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.json < /var/lib/heat-config/deployed/d7a9d92d-ad25-41a0-901e-63123382ec83.notify.json", > "[2018-10-02 10:40:15,503] (heat-config) [INFO] ", > "[2018-10-02 10:40:15,504] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:24,726 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:39:55,273] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json", > "[2018-10-02 10:40:10,217] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.32/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.\\n[2018/10/02 10:39:55 AM] [INFO] Not using any mapping 
file.\\n[2018/10/02 10:39:56 AM] [INFO] Finding active nics\\n[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic\\n[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2\\n[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1\\n[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] applying network configs...\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40\\n[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1\\n[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0\\n[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30\\n[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:10,217] (heat-config) [DEBUG] [2018-10-02 10:39:55,298] (heat-config) [INFO] interface_name=nic1", > "[2018-10-02 10:39:55,298] (heat-config) [INFO] bridge_name=br-ex", > "[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:39:55,298] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:39:55,299] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NetworkDeployment-mzlurkbczhfq-TripleOSoftwareDeployment-3afkpa7let4k/41e5c8c7-344c-473c-9b06-f35c1d0f1c9e", > "[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:39:55,299] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:39:55,299] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911", > "[2018-10-02 10:40:10,213] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-10-02 10:40:10,213] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.32/24\"}], \"type\": \"vlan\", \"vlan_id\": 
30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/10/02 10:39:55 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/10/02 10:39:55 AM] [INFO] Ifcfg net config provider created.", > "[2018/10/02 10:39:55 AM] [INFO] Not using any mapping file.", > "[2018/10/02 10:39:56 AM] [INFO] Finding active nics", > "[2018/10/02 10:39:56 AM] [INFO] lo is not an active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth2 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth1 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] eth0 is an embedded active nic", > "[2018/10/02 10:39:56 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/10/02 10:39:56 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/10/02 10:39:56 AM] [INFO] nic3 mapped to: eth2", > "[2018/10/02 10:39:56 AM] [INFO] nic2 mapped to: eth1", > "[2018/10/02 10:39:56 AM] [INFO] nic1 mapped to: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding custom route for interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] adding bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] adding interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] adding vlan: vlan40", > "[2018/10/02 10:39:56 AM] [INFO] applying network configs...", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 10:39:56 
AM] [INFO] running ifdown on interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: eth0", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan30", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on interface: vlan40", > "[2018/10/02 10:39:56 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/10/02 10:39:56 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth1", > "[2018/10/02 10:39:56 AM] [INFO] running ifup on interface: eth0", > 
"[2018/10/02 10:40:01 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:05 AM] [INFO] running ifup on interface: vlan40", > "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan30", > "[2018/10/02 10:40:09 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 
192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-10-02 10:40:10,213] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/19664322-0639-4489-bc4a-ea7b4675f911", > "", > "[2018-10-02 10:40:10,217] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:10,218] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.json < /var/lib/heat-config/deployed/19664322-0639-4489-bc4a-ea7b4675f911.notify.json", > "[2018-10-02 10:40:10,629] (heat-config) [INFO] ", > "[2018-10-02 10:40:10,629] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:24,760 p=605 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:24,760 p=605 u=mistral | Tuesday 02 October 2018 10:40:24 -0400 (0:00:00.231) 0:00:55.033 ******* >2018-10-02 10:40:24,791 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:24,821 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:24,832 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:24,859 p=605 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-10-02 10:40:24,859 p=605 u=mistral | Tuesday 02 October 2018 10:40:24 -0400 (0:00:00.099) 0:00:55.133 ******* >2018-10-02 10:40:24,988 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "686fad85-ce4b-4a59-939f-7de69ffee1e9"}, "changed": false} >2018-10-02 10:40:25,016 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:25,016 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.156) 0:00:55.289 ******* >2018-10-02 10:40:25,144 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:25,171 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:25,171 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.154) 0:00:55.444 ******* >2018-10-02 10:40:25,192 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,218 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:25,219 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.047) 0:00:55.492 ******* >2018-10-02 10:40:25,239 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,266 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:25,266 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.047) 0:00:55.539 ******* >2018-10-02 10:40:25,284 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,312 p=605 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment for check-mode] *** >2018-10-02 10:40:25,312 p=605 
u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.046) 0:00:55.585 ******* >2018-10-02 10:40:25,337 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,366 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:25,366 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.054) 0:00:55.640 ******* >2018-10-02 10:40:25,384 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,410 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:25,410 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.043) 0:00:55.683 ******* >2018-10-02 10:40:25,429 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,454 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:25,455 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.044) 0:00:55.728 ******* >2018-10-02 10:40:25,475 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,501 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:25,501 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.046) 0:00:55.775 ******* >2018-10-02 10:40:25,522 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:25,549 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:25,549 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.047) 0:00:55.822 ******* >2018-10-02 10:40:25,570 p=605 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 10:40:25,598 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:25,599 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.049) 0:00:55.872 ******* >2018-10-02 10:40:25,619 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:25,647 p=605 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-10-02 10:40:25,648 p=605 u=mistral | Tuesday 02 October 2018 10:40:25 -0400 (0:00:00.048) 0:00:55.921 ******* >2018-10-02 10:40:26,247 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "67541b856af103e0dc1d5635352735e8325ecaf2", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-686fad85-ce4b-4a59-939f-7de69ffee1e9", "gid": 0, "group": "root", "md5sum": "068ef3740ef92446c17c498721dc2d4a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491225.77-214635402185922/source", "state": "file", "uid": 0} >2018-10-02 10:40:26,277 p=605 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-10-02 10:40:26,277 p=605 u=mistral | Tuesday 02 October 2018 10:40:26 -0400 (0:00:00.629) 0:00:56.551 ******* >2018-10-02 10:40:26,541 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:26,568 p=605 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-10-02 10:40:26,568 p=605 u=mistral | Tuesday 02 October 2018 10:40:26 -0400 (0:00:00.290) 0:00:56.841 ******* >2018-10-02 10:40:26,590 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:26,615 p=605 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when 
previous deployment failed] *** >2018-10-02 10:40:26,615 p=605 u=mistral | Tuesday 02 October 2018 10:40:26 -0400 (0:00:00.047) 0:00:56.888 ******* >2018-10-02 10:40:26,637 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:26,663 p=605 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-10-02 10:40:26,663 p=605 u=mistral | Tuesday 02 October 2018 10:40:26 -0400 (0:00:00.047) 0:00:56.936 ******* >2018-10-02 10:40:26,683 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:26,710 p=605 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-10-02 10:40:26,710 p=605 u=mistral | Tuesday 02 October 2018 10:40:26 -0400 (0:00:00.047) 0:00:56.983 ******* >2018-10-02 10:40:27,391 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.notify.json)", "delta": "0:00:00.483901", "end": "2018-10-02 10:40:27.357426", "rc": 0, "start": "2018-10-02 10:40:26.873525", "stderr": "[2018-10-02 10:40:26,900] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json\n[2018-10-02 10:40:26,931] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:26,931] (heat-config) [DEBUG] [2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:26,923] (heat-config) [INFO] 
deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-ControllerUpgradeInitDeployment-l7ld6ooyevmx/41dbaf62-3211-48be-ac8b-497d30c7e551\n[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:26,923] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9\n[2018-10-02 10:40:26,927] (heat-config) [INFO] \n[2018-10-02 10:40:26,928] (heat-config) [DEBUG] \n[2018-10-02 10:40:26,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9\n\n[2018-10-02 10:40:26,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:26,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.notify.json\n[2018-10-02 10:40:27,350] (heat-config) [INFO] \n[2018-10-02 10:40:27,350] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:26,900] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json", "[2018-10-02 10:40:26,931] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:26,931] (heat-config) [DEBUG] [2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-ControllerUpgradeInitDeployment-l7ld6ooyevmx/41dbaf62-3211-48be-ac8b-497d30c7e551", "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:26,923] (heat-config) 
[INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:26,923] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9", "[2018-10-02 10:40:26,927] (heat-config) [INFO] ", "[2018-10-02 10:40:26,928] (heat-config) [DEBUG] ", "[2018-10-02 10:40:26,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9", "", "[2018-10-02 10:40:26,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:26,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.notify.json", "[2018-10-02 10:40:27,350] (heat-config) [INFO] ", "[2018-10-02 10:40:27,350] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:27,421 p=605 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-10-02 10:40:27,421 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.711) 0:00:57.694 ******* >2018-10-02 10:40:27,484 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:26,900] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json", > "[2018-10-02 10:40:26,931] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:26,931] (heat-config) [DEBUG] [2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-ControllerUpgradeInitDeployment-l7ld6ooyevmx/41dbaf62-3211-48be-ac8b-497d30c7e551", > "[2018-10-02 10:40:26,923] (heat-config) 
[INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:26,923] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:26,923] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9", > "[2018-10-02 10:40:26,927] (heat-config) [INFO] ", > "[2018-10-02 10:40:26,928] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:26,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/686fad85-ce4b-4a59-939f-7de69ffee1e9", > "", > "[2018-10-02 10:40:26,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:26,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.json < /var/lib/heat-config/deployed/686fad85-ce4b-4a59-939f-7de69ffee1e9.notify.json", > "[2018-10-02 10:40:27,350] (heat-config) [INFO] ", > "[2018-10-02 10:40:27,350] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:27,519 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:27,519 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.098) 0:00:57.793 ******* >2018-10-02 10:40:27,537 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:27,565 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:27,565 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.045) 0:00:57.839 ******* >2018-10-02 10:40:27,632 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "79847964-2497-4372-8faf-757a6f98ded5"}, "changed": false} >2018-10-02 10:40:27,657 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a"}, "changed": false} 
>2018-10-02 10:40:27,696 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "97419a77-2bcc-4c19-9523-c1ead08bbf90"}, "changed": false} >2018-10-02 10:40:27,725 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:27,725 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.159) 0:00:57.999 ******* >2018-10-02 10:40:27,794 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:27,824 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:27,853 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:27,880 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:27,880 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.154) 0:00:58.154 ******* >2018-10-02 10:40:27,912 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:27,945 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:27,957 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:27,981 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:27,982 p=605 u=mistral | Tuesday 02 October 2018 10:40:27 -0400 (0:00:00.101) 0:00:58.255 ******* >2018-10-02 10:40:28,011 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,039 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,051 p=605 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,074 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:28,075 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.092) 0:00:58.348 ******* >2018-10-02 10:40:28,102 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,130 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,143 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,176 p=605 u=mistral | TASK [Render deployment file for CADeployment for check-mode] ****************** >2018-10-02 10:40:28,176 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.101) 0:00:58.449 ******* >2018-10-02 10:40:28,210 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,246 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,259 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,284 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:28,284 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.108) 0:00:58.557 ******* >2018-10-02 10:40:28,316 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,346 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,360 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-10-02 10:40:28,386 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:28,386 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.101) 0:00:58.659 ******* >2018-10-02 10:40:28,417 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,448 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,461 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,488 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:28,489 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.102) 0:00:58.762 ******* >2018-10-02 10:40:28,521 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,559 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,574 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,600 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:28,600 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.111) 0:00:58.873 ******* >2018-10-02 10:40:28,633 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:28,663 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:28,682 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:40:28,705 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:28,706 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.105) 0:00:58.979 ******* >2018-10-02 
10:40:28,734 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,765 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,780 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:28,806 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:28,806 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.100) 0:00:59.079 ******* >2018-10-02 10:40:28,836 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:28,865 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:28,886 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:40:28,910 p=605 u=mistral | TASK [Render deployment file for CADeployment] ********************************* >2018-10-02 10:40:28,910 p=605 u=mistral | Tuesday 02 October 2018 10:40:28 -0400 (0:00:00.104) 0:00:59.184 ******* >2018-10-02 10:40:29,492 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d27efafa48f321cbe397eeaa2cb7b2fba8dd0bd9", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-79847964-2497-4372-8faf-757a6f98ded5", "gid": 0, "group": "root", "md5sum": "ae11ae5777e3ff7eb4d9ca783ee2602d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2999, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491228.98-225641157440034/source", "state": "file", "uid": 0} >2018-10-02 10:40:29,512 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "21fd396a3b881607e37884830cd3e7a7eabdec05", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a", "gid": 0, "group": "root", "md5sum": "ba8fb9efe615e9c2f2fcc61ed3979575", "mode": "0644", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2996, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491229.0-71289137182045/source", "state": "file", "uid": 0} >2018-10-02 10:40:29,529 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "223ebd90e7a85f1c2c26d97aa9d7808d3a4853e2", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-97419a77-2bcc-4c19-9523-c1ead08bbf90", "gid": 0, "group": "root", "md5sum": "6d3b374fe2b4e08ea659f02287783639", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3000, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491229.03-116631225657458/source", "state": "file", "uid": 0} >2018-10-02 10:40:29,557 p=605 u=mistral | TASK [Check if deployed file exists for CADeployment] ************************** >2018-10-02 10:40:29,557 p=605 u=mistral | Tuesday 02 October 2018 10:40:29 -0400 (0:00:00.646) 0:00:59.830 ******* >2018-10-02 10:40:29,780 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:29,795 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:29,816 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:29,848 p=605 u=mistral | TASK [Check previous deployment rc for CADeployment] *************************** >2018-10-02 10:40:29,849 p=605 u=mistral | Tuesday 02 October 2018 10:40:29 -0400 (0:00:00.291) 0:01:00.122 ******* >2018-10-02 10:40:29,881 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:29,911 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:29,925 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:29,955 p=605 u=mistral | TASK [Remove deployed file for CADeployment when 
previous deployment failed] *** >2018-10-02 10:40:29,955 p=605 u=mistral | Tuesday 02 October 2018 10:40:29 -0400 (0:00:00.106) 0:01:00.228 ******* >2018-10-02 10:40:29,988 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,015 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,032 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,054 p=605 u=mistral | TASK [Force remove deployed file for CADeployment] ***************************** >2018-10-02 10:40:30,054 p=605 u=mistral | Tuesday 02 October 2018 10:40:30 -0400 (0:00:00.099) 0:01:00.327 ******* >2018-10-02 10:40:30,082 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,110 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,124 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:30,147 p=605 u=mistral | TASK [Run deployment CADeployment] ********************************************* >2018-10-02 10:40:30,147 p=605 u=mistral | Tuesday 02 October 2018 10:40:30 -0400 (0:00:00.092) 0:01:00.420 ******* >2018-10-02 10:40:31,472 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.notify.json)", "delta": "0:00:01.126922", "end": "2018-10-02 10:40:31.436362", "rc": 0, "start": "2018-10-02 10:40:30.309440", "stderr": "[2018-10-02 10:40:30,338] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json\n[2018-10-02 10:40:31,010] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 10:40:31,010] (heat-config) [DEBUG] [2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 10:40:30,358] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 10:40:30,358] (heat-config) [INFO] 
deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NodeTLSCAData-brwakzxdr72j-CADeployment-hiwehel5uvo4/f8d9cee2-6881-4427-a81c-46b5bc563e9a\n[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:30,358] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5\n[2018-10-02 10:40:31,006] (heat-config) [INFO] \n[2018-10-02 10:40:31,006] (heat-config) [DEBUG] \n[2018-10-02 10:40:31,006] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5\n\n[2018-10-02 10:40:31,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:31,011] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json < /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.notify.json\n[2018-10-02 10:40:31,428] (heat-config) [INFO] \n[2018-10-02 10:40:31,429] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:30,338] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json", "[2018-10-02 10:40:31,010] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 10:40:31,010] (heat-config) [DEBUG] [2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", 
"MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 10:40:30,358] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NodeTLSCAData-brwakzxdr72j-CADeployment-hiwehel5uvo4/f8d9cee2-6881-4427-a81c-46b5bc563e9a", "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", 
"[2018-10-02 10:40:30,358] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5", "[2018-10-02 10:40:31,006] (heat-config) [INFO] ", "[2018-10-02 10:40:31,006] (heat-config) [DEBUG] ", "[2018-10-02 10:40:31,006] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5", "", "[2018-10-02 10:40:31,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:31,011] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json < /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.notify.json", "[2018-10-02 10:40:31,428] (heat-config) [INFO] ", "[2018-10-02 10:40:31,429] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:31,582 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.notify.json)", "delta": "0:00:01.168960", "end": "2018-10-02 10:40:31.555207", "rc": 0, "start": "2018-10-02 10:40:30.386247", "stderr": "[2018-10-02 10:40:30,414] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json\n[2018-10-02 10:40:31,145] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 10:40:31,145] (heat-config) [DEBUG] [2018-10-02 10:40:30,439] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 10:40:30,440] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 10:40:30,440] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NodeTLSCAData-hywv2blg732h-CADeployment-l2bqoyswlab5/97de12d1-e990-452f-abb1-feb95c63bc45\n[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:30,440] 
(heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90\n[2018-10-02 10:40:31,141] (heat-config) [INFO] \n[2018-10-02 10:40:31,141] (heat-config) [DEBUG] \n[2018-10-02 10:40:31,141] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90\n\n[2018-10-02 10:40:31,145] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:31,145] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.notify.json\n[2018-10-02 10:40:31,549] (heat-config) [INFO] \n[2018-10-02 10:40:31,549] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:30,414] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json", "[2018-10-02 10:40:31,145] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 10:40:31,145] (heat-config) [DEBUG] [2018-10-02 10:40:30,439] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 10:40:30,440] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", 
"EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 10:40:30,440] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NodeTLSCAData-hywv2blg732h-CADeployment-l2bqoyswlab5/97de12d1-e990-452f-abb1-feb95c63bc45", "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:30,440] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90", "[2018-10-02 10:40:31,141] (heat-config) [INFO] ", "[2018-10-02 10:40:31,141] (heat-config) [DEBUG] ", "[2018-10-02 10:40:31,141] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90", "", "[2018-10-02 10:40:31,145] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:31,145] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.notify.json", "[2018-10-02 10:40:31,549] (heat-config) [INFO] ", "[2018-10-02 10:40:31,549] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:31,660 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.notify.json)", "delta": "0:00:01.265656", "end": "2018-10-02 10:40:31.630974", "rc": 0, "start": "2018-10-02 10:40:30.365318", "stderr": "[2018-10-02 10:40:30,395] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json\n[2018-10-02 10:40:31,185] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-10-02 10:40:31,185] (heat-config) [DEBUG] [2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ\n17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk\nEmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO\ncX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF\nSoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL\n/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O\nBBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o\nF8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL\ngT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX\nuUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9\nfkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny\nP8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh\nA3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy\nSju7PiEvw2a6evE=\n-----END CERTIFICATE-----\n[2018-10-02 10:40:30,421] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NodeTLSCAData-rd6gptzsxbsz-CADeployment-nprgeo5rcpa4/7e897744-7290-4d02-8c7d-bec4047fd739\n[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:30,422] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:30,422] (heat-config) 
[DEBUG] Running /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a\n[2018-10-02 10:40:31,180] (heat-config) [INFO] \n[2018-10-02 10:40:31,180] (heat-config) [DEBUG] \n[2018-10-02 10:40:31,180] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a\n\n[2018-10-02 10:40:31,185] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:31,186] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.notify.json\n[2018-10-02 10:40:31,623] (heat-config) [INFO] \n[2018-10-02 10:40:31,623] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:30,395] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json", "[2018-10-02 10:40:31,185] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-10-02 10:40:31,185] (heat-config) [DEBUG] [2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", 
"EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", "Sju7PiEvw2a6evE=", "-----END CERTIFICATE-----", "[2018-10-02 10:40:30,421] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NodeTLSCAData-rd6gptzsxbsz-CADeployment-nprgeo5rcpa4/7e897744-7290-4d02-8c7d-bec4047fd739", "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:30,422] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:30,422] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a", "[2018-10-02 10:40:31,180] (heat-config) [INFO] ", "[2018-10-02 10:40:31,180] (heat-config) [DEBUG] ", "[2018-10-02 10:40:31,180] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a", "", "[2018-10-02 10:40:31,185] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:31,186] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.notify.json", "[2018-10-02 10:40:31,623] (heat-config) [INFO] ", "[2018-10-02 10:40:31,623] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:31,691 p=605 u=mistral | TASK [Output for CADeployment] ************************************************* >2018-10-02 10:40:31,691 p=605 u=mistral | Tuesday 02 October 2018 10:40:31 -0400 (0:00:01.544) 0:01:01.965 ******* >2018-10-02 10:40:31,763 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:30,338] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json", > "[2018-10-02 10:40:31,010] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 10:40:31,010] (heat-config) [DEBUG] [2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > 
"cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-ovd2peurroaq-0-7fwquijmepyp-NodeTLSCAData-brwakzxdr72j-CADeployment-hiwehel5uvo4/f8d9cee2-6881-4427-a81c-46b5bc563e9a", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:30,358] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:30,358] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5", > "[2018-10-02 10:40:31,006] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,006] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:31,006] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/79847964-2497-4372-8faf-757a6f98ded5", > "", > "[2018-10-02 10:40:31,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:31,011] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.json < /var/lib/heat-config/deployed/79847964-2497-4372-8faf-757a6f98ded5.notify.json", > "[2018-10-02 10:40:31,428] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,429] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:31,779 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:30,395] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json", > "[2018-10-02 10:40:31,185] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 10:40:31,185] (heat-config) [DEBUG] [2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > 
"F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > "P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NodeTLSCAData-rd6gptzsxbsz-CADeployment-nprgeo5rcpa4/7e897744-7290-4d02-8c7d-bec4047fd739", > "[2018-10-02 10:40:30,421] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:30,422] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:30,422] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a", > "[2018-10-02 10:40:31,180] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,180] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:31,180] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a", > "", > "[2018-10-02 10:40:31,185] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:31,186] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.json < /var/lib/heat-config/deployed/01b6a973-aeb7-40ef-a2a2-e8a7ed3e315a.notify.json", > "[2018-10-02 10:40:31,623] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,623] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" 
> } > ] >} >2018-10-02 10:40:31,812 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:30,414] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json", > "[2018-10-02 10:40:31,145] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"99a4126d3273e2effad6cc581c0808ef /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-10-02 10:40:31,145] (heat-config) [DEBUG] [2018-10-02 10:40:30,439] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJANlzFOYv2szeMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODEwMDIxMTI4MzJaFw0xOTEwMDIxMTI4MzJaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAL6PyYt0WwO0Q+ZaJKlAr1WOkm+FuquOy4rzJpbYCTVZ", > "17KTQfsYAEH1gYT2R6L9bSKwofM6M26YYdntvaMlK9U+pWo0dY6vUyl1TIr2G5bk", > "EmW1z0xHrzdtjRyclIRHXI/+Pg5+UzvMuYTeMzCLO+vAw04dhrO2IS4ENUFrnSUO", > "cX8dWhoXBf0na3dbxGMlUC9Y1a614a5tAG181S5Pi9mCHODdIPuqQdvQmm+tOxNF", > "SoSQSRispKASLLK94eew3qdU+9St5Q7iMF6noI/NiOoRkZcjb/1JjG5ETH6fXpFL", > "/7VHGiI6tKZVFlOenJWprIcvAMbJpSCH2YXJEkkCq88CAwEAAaNQME4wHQYDVR0O", > "BBYEFIIbpq4fasxmO08oF8gjZ6pk179SMB8GA1UdIwQYMBaAFIIbpq4fasxmO08o", > "F8gjZ6pk179SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABsXA1XL", > "gT+mp8jG7RyrLN6RaLOd3nWdAJT1SqwzpXu9d8t9e6m1wDawvR0dwaKelkhWasaX", > "uUZMFmAVHs5G6FnVCpFgogBqdNbBxY2mzH8vDmj7QriwFGRPFg9AV9Qk3BFkNRO9", > "fkyFm/8AfPuLRLWfdE9ffYFIS0/I70+D5c7JZhFj9j1n2Q6z3UbVRddgP/PwM1Ny", > 
"P8RhBRYVFoBDCmG0e1x/t3IRogAp80kT6sNzLfHjD2/M/LCegIozjbKZTuaimtEh", > "A3dbUG+ZFSVaVHxl+lNsXshvcsTFvfFJi/GzbNLsPv7FyUTQSaN18YgrU147Uohy", > "Sju7PiEvw2a6evE=", > "-----END CERTIFICATE-----", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-NodeTLSCAData-hywv2blg732h-CADeployment-l2bqoyswlab5/97de12d1-e990-452f-abb1-feb95c63bc45", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:30,440] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:30,440] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90", > "[2018-10-02 10:40:31,141] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,141] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:31,141] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/97419a77-2bcc-4c19-9523-c1ead08bbf90", > "", > "[2018-10-02 10:40:31,145] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:31,145] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.json < /var/lib/heat-config/deployed/97419a77-2bcc-4c19-9523-c1ead08bbf90.notify.json", > "[2018-10-02 10:40:31,549] (heat-config) [INFO] ", > "[2018-10-02 10:40:31,549] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:31,843 p=605 u=mistral | TASK [Check-mode for Run deployment CADeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:31,843 p=605 u=mistral | Tuesday 02 October 2018 10:40:31 -0400 (0:00:00.151) 0:01:02.116 ******* 
>2018-10-02 10:40:31,874 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:31,905 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:31,915 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:31,942 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:31,942 p=605 u=mistral | Tuesday 02 October 2018 10:40:31 -0400 (0:00:00.099) 0:01:02.215 ******* >2018-10-02 10:40:32,382 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528"}, "changed": false} >2018-10-02 10:40:32,407 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:32,407 p=605 u=mistral | Tuesday 02 October 2018 10:40:32 -0400 (0:00:00.464) 0:01:02.680 ******* >2018-10-02 10:40:32,870 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 10:40:32,896 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:32,896 p=605 u=mistral | Tuesday 02 October 2018 10:40:32 -0400 (0:00:00.489) 0:01:03.170 ******* >2018-10-02 10:40:32,918 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:32,944 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:32,945 p=605 u=mistral | Tuesday 02 October 2018 10:40:32 -0400 (0:00:00.048) 0:01:03.218 ******* >2018-10-02 10:40:32,965 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:32,992 p=605 u=mistral | TASK [Create 
tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:32,993 p=605 u=mistral | Tuesday 02 October 2018 10:40:32 -0400 (0:00:00.048) 0:01:03.266 ******* >2018-10-02 10:40:33,014 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,041 p=605 u=mistral | TASK [Render deployment file for ControllerDeployment for check-mode] ********** >2018-10-02 10:40:33,042 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.048) 0:01:03.315 ******* >2018-10-02 10:40:33,061 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,086 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:33,086 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.044) 0:01:03.360 ******* >2018-10-02 10:40:33,105 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,132 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:33,132 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.045) 0:01:03.405 ******* >2018-10-02 10:40:33,150 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,177 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:33,177 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.044) 0:01:03.450 ******* >2018-10-02 10:40:33,206 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,234 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:33,234 p=605 u=mistral | 
Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.057) 0:01:03.508 ******* >2018-10-02 10:40:33,261 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:33,287 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:33,287 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.052) 0:01:03.560 ******* >2018-10-02 10:40:33,307 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:33,332 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:33,332 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.044) 0:01:03.605 ******* >2018-10-02 10:40:33,352 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:33,382 p=605 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-10-02 10:40:33,382 p=605 u=mistral | Tuesday 02 October 2018 10:40:33 -0400 (0:00:00.049) 0:01:03.655 ******* >2018-10-02 10:40:34,433 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1fd578991d764bd8fd26c7d782c7d9f9da7f8b0d", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528", "gid": 0, "group": "root", "md5sum": "d3bdff80e1b366541e7b26119b3fde9a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73841, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491233.87-79636017520849/source", "state": "file", "uid": 0} >2018-10-02 10:40:34,459 p=605 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-10-02 10:40:34,459 p=605 u=mistral | Tuesday 02 October 2018 10:40:34 -0400 (0:00:01.077) 0:01:04.732 ******* >2018-10-02 10:40:34,665 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:34,696 
p=605 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-10-02 10:40:34,696 p=605 u=mistral | Tuesday 02 October 2018 10:40:34 -0400 (0:00:00.237) 0:01:04.969 ******* >2018-10-02 10:40:34,718 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:34,748 p=605 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-10-02 10:40:34,748 p=605 u=mistral | Tuesday 02 October 2018 10:40:34 -0400 (0:00:00.051) 0:01:05.021 ******* >2018-10-02 10:40:34,772 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:34,801 p=605 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-10-02 10:40:34,801 p=605 u=mistral | Tuesday 02 October 2018 10:40:34 -0400 (0:00:00.053) 0:01:05.075 ******* >2018-10-02 10:40:34,820 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:34,849 p=605 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-10-02 10:40:34,849 p=605 u=mistral | Tuesday 02 October 2018 10:40:34 -0400 (0:00:00.047) 0:01:05.123 ******* >2018-10-02 10:40:35,654 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.notify.json)", "delta": "0:00:00.596992", "end": "2018-10-02 10:40:35.620012", "rc": 0, "start": "2018-10-02 10:40:35.023020", "stderr": "[2018-10-02 10:40:35,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json\n[2018-10-02 10:40:35,200] (heat-config) [INFO] {\"deploy_stdout\": \"\", 
\"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:35,201] (heat-config) [DEBUG] \n[2018-10-02 10:40:35,201] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:40:35,201] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.notify.json\n[2018-10-02 10:40:35,612] (heat-config) [INFO] \n[2018-10-02 10:40:35,612] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:35,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json", "[2018-10-02 10:40:35,200] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:35,201] (heat-config) [DEBUG] ", "[2018-10-02 10:40:35,201] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:40:35,201] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.notify.json", "[2018-10-02 10:40:35,612] (heat-config) [INFO] ", "[2018-10-02 10:40:35,612] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []}
>2018-10-02 10:40:35,682 p=605 u=mistral | TASK [Output for ControllerDeployment] *****************************************
>2018-10-02 10:40:35,682 p=605 u=mistral | Tuesday 02 October 2018 10:40:35 -0400 (0:00:00.087) 0:01:05.955 *******
>2018-10-02 10:40:35,738 p=605 u=mistral | ok: [controller-0] => {
> "msg": [
> {
> "stderr": [
> "[2018-10-02 10:40:35,057] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json",
> "[2018-10-02 10:40:35,200] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}",
>
"[2018-10-02 10:40:35,201] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:35,201] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:40:35,201] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.json < /var/lib/heat-config/deployed/16ac6bc3-d7bd-4a2a-9ff0-277d3ffe3528.notify.json", > "[2018-10-02 10:40:35,612] (heat-config) [INFO] ", > "[2018-10-02 10:40:35,612] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:35,769 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:35,769 p=605 u=mistral | Tuesday 02 October 2018 10:40:35 -0400 (0:00:00.087) 0:01:06.043 ******* >2018-10-02 10:40:35,785 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:35,813 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:35,814 p=605 u=mistral | Tuesday 02 October 2018 10:40:35 -0400 (0:00:00.044) 0:01:06.087 ******* >2018-10-02 10:40:35,876 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "6ee1174a-a974-477c-b672-48ab1d893ad5"}, "changed": false} >2018-10-02 10:40:35,901 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:35,901 p=605 u=mistral | Tuesday 02 October 2018 10:40:35 -0400 (0:00:00.087) 0:01:06.175 ******* >2018-10-02 10:40:35,970 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:35,997 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:35,997 p=605 u=mistral | Tuesday 02 October 2018 10:40:35 -0400 (0:00:00.095) 0:01:06.270 ******* >2018-10-02 10:40:36,018 p=605 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,047 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************
>2018-10-02 10:40:36,048 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.050) 0:01:06.321 *******
>2018-10-02 10:40:36,066 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,094 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] *********************
>2018-10-02 10:40:36,094 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.046) 0:01:06.367 *******
>2018-10-02 10:40:36,114 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,143 p=605 u=mistral | TASK [Render deployment file for ControllerHostsDeployment for check-mode] *****
>2018-10-02 10:40:36,143 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.049) 0:01:06.416 *******
>2018-10-02 10:40:36,163 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,194 p=605 u=mistral | TASK [Run hiera deployment for check mode] *************************************
>2018-10-02 10:40:36,195 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.051) 0:01:06.468 *******
>2018-10-02 10:40:36,215 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,243 p=605 u=mistral | TASK [List hieradata files for check mode] *************************************
>2018-10-02 10:40:36,243 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.048) 0:01:06.516 *******
>2018-10-02 10:40:36,261 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02
10:40:36,288 p=605 u=mistral | TASK [diff hieradata changes for check mode] ***********************************
>2018-10-02 10:40:36,288 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.044) 0:01:06.561 *******
>2018-10-02 10:40:36,311 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,338 p=605 u=mistral | TASK [diff hieradata changes for check mode] ***********************************
>2018-10-02 10:40:36,338 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.050) 0:01:06.611 *******
>2018-10-02 10:40:36,359 p=605 u=mistral | skipping: [controller-0] => {}
>2018-10-02 10:40:36,387 p=605 u=mistral | TASK [hiera.yaml changes for check mode] ***************************************
>2018-10-02 10:40:36,387 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.049) 0:01:06.660 *******
>2018-10-02 10:40:36,406 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-10-02 10:40:36,432 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] **********************************
>2018-10-02 10:40:36,432 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.045) 0:01:06.705 *******
>2018-10-02 10:40:36,450 p=605 u=mistral | skipping: [controller-0] => {}
>2018-10-02 10:40:36,476 p=605 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ********************
>2018-10-02 10:40:36,477 p=605 u=mistral | Tuesday 02 October 2018 10:40:36 -0400 (0:00:00.044) 0:01:06.750 *******
>2018-10-02 10:40:37,029 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "74319b8f20556c47994ee087061ad80a8306ee91", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-6ee1174a-a974-477c-b672-48ab1d893ad5", "gid": 0, "group": "root", "md5sum": "548d8f868feb5a69bf49d46452b902b4", "mode": "0644", "owner": "root", "secontext":
"system_u:object_r:var_lib_t:s0", "size": 4429, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491236.54-264262311425326/source", "state": "file", "uid": 0} >2018-10-02 10:40:37,060 p=605 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-10-02 10:40:37,060 p=605 u=mistral | Tuesday 02 October 2018 10:40:37 -0400 (0:00:00.583) 0:01:07.333 ******* >2018-10-02 10:40:37,264 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:37,294 p=605 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-10-02 10:40:37,294 p=605 u=mistral | Tuesday 02 October 2018 10:40:37 -0400 (0:00:00.234) 0:01:07.567 ******* >2018-10-02 10:40:37,315 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:37,345 p=605 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-10-02 10:40:37,345 p=605 u=mistral | Tuesday 02 October 2018 10:40:37 -0400 (0:00:00.051) 0:01:07.618 ******* >2018-10-02 10:40:37,366 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:37,395 p=605 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-10-02 10:40:37,395 p=605 u=mistral | Tuesday 02 October 2018 10:40:37 -0400 (0:00:00.050) 0:01:07.669 ******* >2018-10-02 10:40:37,414 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:37,444 p=605 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-10-02 10:40:37,444 p=605 u=mistral | Tuesday 02 October 2018 10:40:37 -0400 (0:00:00.048) 0:01:07.717 ******* >2018-10-02 10:40:38,193 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": 
"/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.notify.json)", "delta": "0:00:00.506352", "end": "2018-10-02 10:40:38.126584", "rc": 0, "start": "2018-10-02 10:40:37.620232", "stderr": "[2018-10-02 10:40:37,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json\n[2018-10-02 10:40:37,705] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:37,706] (heat-config) [DEBUG] [2018-10-02 10:40:37,671] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 10:40:37,672] (heat-config) [INFO] 
deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-q7azss4qwsma-0-7i6ctd5eqlqv/ccf93adc-3b54-43d7-9faf-f72a0baf6d6c\n[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:37,672] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5\n[2018-10-02 10:40:37,701] (heat-config) [INFO] \n[2018-10-02 10:40:37,701] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 
ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 
ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /controller-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 
10:40:37,701] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5\n\n[2018-10-02 10:40:37,706] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:37,707] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.notify.json\n[2018-10-02 10:40:38,120] (heat-config) [INFO] \n[2018-10-02 10:40:38,120] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:37,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json", "[2018-10-02 10:40:37,705] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 
compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain 
compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:37,706] (heat-config) [DEBUG] [2018-10-02 10:40:37,671] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 10:40:37,672] (heat-config) 
[INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-q7azss4qwsma-0-7i6ctd5eqlqv/ccf93adc-3b54-43d7-9faf-f72a0baf6d6c", "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:37,672] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5", "[2018-10-02 10:40:37,701] (heat-config) [INFO] ", "[2018-10-02 10:40:37,701] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", 
"192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 
ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain 
ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /controller-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 10:40:37,701] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5", "", "[2018-10-02 10:40:37,706] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:37,707] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.notify.json", "[2018-10-02 10:40:38,120] (heat-config) [INFO] ", "[2018-10-02 10:40:38,120] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:38,239 p=605 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-10-02 10:40:38,239 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.795) 0:01:08.512 ******* >2018-10-02 10:40:38,322 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:37,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json", > "[2018-10-02 10:40:37,705] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain 
controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:37,706] (heat-config) [DEBUG] [2018-10-02 10:40:37,671] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-q7azss4qwsma-0-7i6ctd5eqlqv/ccf93adc-3b54-43d7-9faf-f72a0baf6d6c", > "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:37,672] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:37,672] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5", > "[2018-10-02 10:40:37,701] (heat-config) [INFO] ", > "[2018-10-02 10:40:37,701] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain 
compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 
compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > 
"172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 
controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > 
"192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain 
controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 10:40:37,701] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6ee1174a-a974-477c-b672-48ab1d893ad5", > "", > "[2018-10-02 10:40:37,706] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:37,707] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.json < /var/lib/heat-config/deployed/6ee1174a-a974-477c-b672-48ab1d893ad5.notify.json", > "[2018-10-02 10:40:38,120] (heat-config) [INFO] ", > "[2018-10-02 10:40:38,120] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:38,371 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:38,372 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.132) 0:01:08.645 ******* >2018-10-02 10:40:38,390 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:38,420 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:38,420 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.048) 0:01:08.693 ******* >2018-10-02 10:40:38,598 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "181cb89a-c5f2-4c40-8707-ae760a59b000"}, "changed": false} >2018-10-02 10:40:38,626 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:38,626 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.206) 0:01:08.899 ******* >2018-10-02 10:40:38,807 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "hiera"}, 
"changed": false} >2018-10-02 10:40:38,836 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:38,836 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.210) 0:01:09.109 ******* >2018-10-02 10:40:38,861 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:38,892 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:38,892 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.055) 0:01:09.165 ******* >2018-10-02 10:40:38,913 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:38,942 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:38,942 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.049) 0:01:09.215 ******* >2018-10-02 10:40:38,964 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:38,994 p=605 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment for check-mode] *** >2018-10-02 10:40:38,995 p=605 u=mistral | Tuesday 02 October 2018 10:40:38 -0400 (0:00:00.052) 0:01:09.268 ******* >2018-10-02 10:40:39,017 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:39,046 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:39,046 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.051) 0:01:09.319 ******* >2018-10-02 10:40:39,068 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:39,094 p=605 u=mistral | TASK [List hieradata files for check mode] 
************************************* >2018-10-02 10:40:39,095 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.048) 0:01:09.368 ******* >2018-10-02 10:40:39,116 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:39,144 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:39,144 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.049) 0:01:09.417 ******* >2018-10-02 10:40:39,167 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:39,195 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:39,195 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.050) 0:01:09.468 ******* >2018-10-02 10:40:39,219 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:39,249 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:39,249 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.053) 0:01:09.522 ******* >2018-10-02 10:40:39,269 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:39,297 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:39,297 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.048) 0:01:09.570 ******* >2018-10-02 10:40:39,316 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:39,349 p=605 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-10-02 10:40:39,350 p=605 u=mistral | Tuesday 02 October 2018 10:40:39 -0400 (0:00:00.052) 0:01:09.623 ******* >2018-10-02 10:40:40,024 p=605 u=mistral | changed: [controller-0] => {"changed": 
true, "checksum": "2328bf3a94a99ffe17314a335b9742fb7ac22009", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-181cb89a-c5f2-4c40-8707-ae760a59b000", "gid": 0, "group": "root", "md5sum": "35296e15b5bdec555f2195d30a3090ae", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19544, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491239.53-200567378747594/source", "state": "file", "uid": 0} >2018-10-02 10:40:40,052 p=605 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-10-02 10:40:40,052 p=605 u=mistral | Tuesday 02 October 2018 10:40:40 -0400 (0:00:00.702) 0:01:10.325 ******* >2018-10-02 10:40:40,256 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:40,284 p=605 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-10-02 10:40:40,284 p=605 u=mistral | Tuesday 02 October 2018 10:40:40 -0400 (0:00:00.232) 0:01:10.558 ******* >2018-10-02 10:40:40,305 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:40,331 p=605 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-10-02 10:40:40,332 p=605 u=mistral | Tuesday 02 October 2018 10:40:40 -0400 (0:00:00.047) 0:01:10.605 ******* >2018-10-02 10:40:40,354 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:40,381 p=605 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-10-02 10:40:40,382 p=605 u=mistral | Tuesday 02 October 2018 10:40:40 -0400 (0:00:00.049) 0:01:10.655 ******* >2018-10-02 10:40:40,404 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:40,433 p=605 
u=mistral | TASK [Run deployment ControllerAllNodesDeployment] ***************************** >2018-10-02 10:40:40,433 p=605 u=mistral | Tuesday 02 October 2018 10:40:40 -0400 (0:00:00.051) 0:01:10.706 ******* >2018-10-02 10:40:41,240 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.notify.json)", "delta": "0:00:00.595528", "end": "2018-10-02 10:40:41.204943", "rc": 0, "start": "2018-10-02 10:40:40.609415", "stderr": "[2018-10-02 10:40:40,638] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json\n[2018-10-02 10:40:40,774] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:40,775] (heat-config) [DEBUG] \n[2018-10-02 10:40:40,775] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:40:40,775] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json < /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.notify.json\n[2018-10-02 10:40:41,198] (heat-config) [INFO] \n[2018-10-02 10:40:41,198] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:40,638] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json", "[2018-10-02 10:40:40,774] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:40,775] (heat-config) [DEBUG] ", "[2018-10-02 10:40:40,775] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:40:40,775] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json < 
/var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.notify.json", "[2018-10-02 10:40:41,198] (heat-config) [INFO] ", "[2018-10-02 10:40:41,198] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:41,270 p=605 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-10-02 10:40:41,270 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.837) 0:01:11.543 ******* >2018-10-02 10:40:41,328 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:40,638] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json", > "[2018-10-02 10:40:40,774] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:40,775] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:40,775] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:40:40,775] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.json < /var/lib/heat-config/deployed/181cb89a-c5f2-4c40-8707-ae760a59b000.notify.json", > "[2018-10-02 10:40:41,198] (heat-config) [INFO] ", > "[2018-10-02 10:40:41,198] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:41,357 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:41,357 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.086) 0:01:11.630 ******* >2018-10-02 10:40:41,374 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,401 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:41,401 p=605 u=mistral | Tuesday 02 October 2018 
10:40:41 -0400 (0:00:00.043) 0:01:11.674 ******* >2018-10-02 10:40:41,466 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "219f11e9-e86d-4bbe-8998-52784a0cf9c8"}, "changed": false} >2018-10-02 10:40:41,493 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:41,493 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.092) 0:01:11.766 ******* >2018-10-02 10:40:41,562 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:41,590 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:41,590 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.096) 0:01:11.863 ******* >2018-10-02 10:40:41,615 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,647 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:41,647 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.056) 0:01:11.920 ******* >2018-10-02 10:40:41,668 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,696 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:41,696 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.049) 0:01:11.969 ******* >2018-10-02 10:40:41,717 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,747 p=605 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment for check-mode] *** >2018-10-02 10:40:41,747 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.050) 0:01:12.020 ******* >2018-10-02 10:40:41,770 p=605 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,793 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:41,793 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.046) 0:01:12.066 ******* >2018-10-02 10:40:41,811 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,838 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:41,838 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.045) 0:01:12.112 ******* >2018-10-02 10:40:41,858 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,884 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:41,884 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.045) 0:01:12.158 ******* >2018-10-02 10:40:41,905 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:41,932 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:41,932 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.047) 0:01:12.206 ******* >2018-10-02 10:40:41,956 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:41,981 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:41,981 p=605 u=mistral | Tuesday 02 October 2018 10:40:41 -0400 (0:00:00.048) 0:01:12.254 ******* >2018-10-02 10:40:41,999 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:42,024 p=605 u=mistral | TASK [diff hiera.yaml 
changes for check mode] ********************************** >2018-10-02 10:40:42,024 p=605 u=mistral | Tuesday 02 October 2018 10:40:42 -0400 (0:00:00.043) 0:01:12.297 ******* >2018-10-02 10:40:42,042 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:42,073 p=605 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-10-02 10:40:42,073 p=605 u=mistral | Tuesday 02 October 2018 10:40:42 -0400 (0:00:00.049) 0:01:12.347 ******* >2018-10-02 10:40:42,638 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8242aabbea3794c1d42b7acecf38aefaaa887fe9", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-219f11e9-e86d-4bbe-8998-52784a0cf9c8", "gid": 0, "group": "root", "md5sum": "8ae7a9a686a73d84e33c04713f2c440b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491242.14-34801048879711/source", "state": "file", "uid": 0} >2018-10-02 10:40:42,668 p=605 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-10-02 10:40:42,668 p=605 u=mistral | Tuesday 02 October 2018 10:40:42 -0400 (0:00:00.594) 0:01:12.941 ******* >2018-10-02 10:40:42,868 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:42,896 p=605 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-10-02 10:40:42,897 p=605 u=mistral | Tuesday 02 October 2018 10:40:42 -0400 (0:00:00.228) 0:01:13.170 ******* >2018-10-02 10:40:42,915 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:42,940 p=605 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 10:40:42,940 p=605 u=mistral | Tuesday 02 October 2018 
10:40:42 -0400 (0:00:00.043) 0:01:13.213 ******* >2018-10-02 10:40:42,960 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:42,984 p=605 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-10-02 10:40:42,984 p=605 u=mistral | Tuesday 02 October 2018 10:40:42 -0400 (0:00:00.043) 0:01:13.257 ******* >2018-10-02 10:40:43,000 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:43,026 p=605 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-10-02 10:40:43,026 p=605 u=mistral | Tuesday 02 October 2018 10:40:43 -0400 (0:00:00.041) 0:01:13.299 ******* >2018-10-02 10:40:44,540 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.notify.json)", "delta": "0:00:01.237862", "end": "2018-10-02 10:40:44.504324", "rc": 0, "start": "2018-10-02 10:40:43.266462", "stderr": "[2018-10-02 10:40:43,293] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json\n[2018-10-02 10:40:44,083] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 
192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:44,083] (heat-config) [DEBUG] [2018-10-02 10:40:43,317] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12\n[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-pyq4m4szir3k-0-uvgxi62gomno/ab59c18d-9b37-4022-b78e-5d9b754a4156\n[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:43,318] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8\n[2018-10-02 10:40:44,078] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\nPing to 10.0.0.123 succeeded.\nSUCCESS\nTrying to ping 172.17.1.14 for local network 172.17.1.0/24.\nPing to 172.17.1.14 succeeded.\nSUCCESS\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\nPing to 172.17.2.12 succeeded.\nSUCCESS\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\nPing to 172.17.3.25 succeeded.\nSUCCESS\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\nPing to 172.17.4.22 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-10-02 10:40:44,078] (heat-config) [DEBUG] \n[2018-10-02 10:40:44,078] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8\n\n[2018-10-02 10:40:44,083] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:44,083] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.notify.json\n[2018-10-02 10:40:44,497] (heat-config) [INFO] \n[2018-10-02 10:40:44,497] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:43,293] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json", "[2018-10-02 10:40:44,083] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:44,083] (heat-config) [DEBUG] [2018-10-02 10:40:43,317] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", "[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_fqdn=False", "[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", 
"[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-pyq4m4szir3k-0-uvgxi62gomno/ab59c18d-9b37-4022-b78e-5d9b754a4156", "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:43,318] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8", "[2018-10-02 10:40:44,078] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.", "Ping to 10.0.0.123 succeeded.", "SUCCESS", "Trying to ping 172.17.1.14 for local network 172.17.1.0/24.", "Ping to 172.17.1.14 succeeded.", "SUCCESS", "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", "Ping to 172.17.2.12 succeeded.", "SUCCESS", "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", "Ping to 172.17.3.25 succeeded.", "SUCCESS", "Trying to ping 172.17.4.22 for local network 172.17.4.0/24.", "Ping to 172.17.4.22 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-10-02 10:40:44,078] (heat-config) [DEBUG] ", "[2018-10-02 10:40:44,078] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8", "", "[2018-10-02 10:40:44,083] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:44,083] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.notify.json", "[2018-10-02 10:40:44,497] (heat-config) [INFO] ", "[2018-10-02 10:40:44,497] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-10-02 10:40:44,567 p=605 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-10-02 10:40:44,567 p=605 u=mistral | Tuesday 02 October 2018 10:40:44 -0400 (0:00:01.541) 0:01:14.841 ******* >2018-10-02 10:40:44,693 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:43,293] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json", > "[2018-10-02 10:40:44,083] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:44,083] (heat-config) [DEBUG] [2018-10-02 10:40:43,317] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] 
deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-pyq4m4szir3k-0-uvgxi62gomno/ab59c18d-9b37-4022-b78e-5d9b754a4156", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:43,318] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:43,318] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8", > "[2018-10-02 10:40:44,078] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.", > "Ping to 10.0.0.123 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.14 for local network 172.17.1.0/24.", > "Ping to 172.17.1.14 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", > "Ping to 172.17.2.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", > "Ping to 172.17.3.25 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.22 for local network 172.17.4.0/24.", > "Ping to 172.17.4.22 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 10:40:44,078] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:44,078] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/219f11e9-e86d-4bbe-8998-52784a0cf9c8", > "", > "[2018-10-02 10:40:44,083] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:44,083] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.json < /var/lib/heat-config/deployed/219f11e9-e86d-4bbe-8998-52784a0cf9c8.notify.json", > "[2018-10-02 10:40:44,497] (heat-config) [INFO] ", > "[2018-10-02 10:40:44,497] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:44,724 p=605 
u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:44,725 p=605 u=mistral | Tuesday 02 October 2018 10:40:44 -0400 (0:00:00.157) 0:01:14.998 ******* >2018-10-02 10:40:44,741 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:44,769 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:44,769 p=605 u=mistral | Tuesday 02 October 2018 10:40:44 -0400 (0:00:00.044) 0:01:15.042 ******* >2018-10-02 10:40:44,906 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "70380c15-38f2-46c4-acce-fc030211028c"}, "changed": false} >2018-10-02 10:40:44,936 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:44,937 p=605 u=mistral | Tuesday 02 October 2018 10:40:44 -0400 (0:00:00.167) 0:01:15.210 ******* >2018-10-02 10:40:45,070 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:45,150 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:45,150 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.213) 0:01:15.423 ******* >2018-10-02 10:40:45,172 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,199 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:45,199 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.049) 0:01:15.473 ******* >2018-10-02 10:40:45,219 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,247 p=605 u=mistral | TASK [Create 
tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:45,247 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.047) 0:01:15.520 ******* >2018-10-02 10:40:45,267 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,294 p=605 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy for check-mode] ***** >2018-10-02 10:40:45,294 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.047) 0:01:15.567 ******* >2018-10-02 10:40:45,315 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,339 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:45,340 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.045) 0:01:15.613 ******* >2018-10-02 10:40:45,359 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,383 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:45,384 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.043) 0:01:15.657 ******* >2018-10-02 10:40:45,403 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,431 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:45,431 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.047) 0:01:15.705 ******* >2018-10-02 10:40:45,455 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,481 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:45,481 p=605 u=mistral | 
Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.049) 0:01:15.755 ******* >2018-10-02 10:40:45,503 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:45,528 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:45,528 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.046) 0:01:15.801 ******* >2018-10-02 10:40:45,546 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:45,575 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:45,575 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.046) 0:01:15.848 ******* >2018-10-02 10:40:45,594 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:45,623 p=605 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-10-02 10:40:45,623 p=605 u=mistral | Tuesday 02 October 2018 10:40:45 -0400 (0:00:00.048) 0:01:15.897 ******* >2018-10-02 10:40:46,175 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "326c32232ab726cc57a39d8e0ea45605b131bc62", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-70380c15-38f2-46c4-acce-fc030211028c", "gid": 0, "group": "root", "md5sum": "7e723c6c5440810c8ddfb463e4ad318e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491245.69-133459035461152/source", "state": "file", "uid": 0} >2018-10-02 10:40:46,204 p=605 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-10-02 10:40:46,204 p=605 u=mistral | Tuesday 02 October 2018 10:40:46 -0400 (0:00:00.580) 0:01:16.478 ******* >2018-10-02 10:40:46,406 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 
10:40:46,434 p=605 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-10-02 10:40:46,434 p=605 u=mistral | Tuesday 02 October 2018 10:40:46 -0400 (0:00:00.230) 0:01:16.708 ******* >2018-10-02 10:40:46,455 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:46,480 p=605 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-10-02 10:40:46,480 p=605 u=mistral | Tuesday 02 October 2018 10:40:46 -0400 (0:00:00.045) 0:01:16.754 ******* >2018-10-02 10:40:46,503 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:46,528 p=605 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-10-02 10:40:46,529 p=605 u=mistral | Tuesday 02 October 2018 10:40:46 -0400 (0:00:00.048) 0:01:16.802 ******* >2018-10-02 10:40:46,554 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:46,581 p=605 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-10-02 10:40:46,582 p=605 u=mistral | Tuesday 02 October 2018 10:40:46 -0400 (0:00:00.053) 0:01:16.855 ******* >2018-10-02 10:40:47,278 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.notify.json)", "delta": "0:00:00.489608", "end": "2018-10-02 10:40:47.242928", "rc": 0, "start": "2018-10-02 10:40:46.753320", "stderr": "[2018-10-02 10:40:46,779] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json\n[2018-10-02 10:40:46,814] (heat-config) [INFO] {\"deploy_stdout\": 
\"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:46,814] (heat-config) [DEBUG] [2018-10-02 10:40:46,803] (heat-config) [INFO] artifact_urls=\n[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d\n[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ControllerArtifactsDeploy-dbwpehfv75bl-0-fe7papsqibqn/bc721fde-7e9c-4f09-a56a-c278b56574d6\n[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:46,804] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c\n[2018-10-02 10:40:46,810] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-10-02 10:40:46,810] (heat-config) [DEBUG] \n[2018-10-02 10:40:46,810] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c\n\n[2018-10-02 10:40:46,814] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:46,814] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.notify.json\n[2018-10-02 10:40:47,236] (heat-config) [INFO] \n[2018-10-02 10:40:47,236] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:46,779] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json", "[2018-10-02 10:40:46,814] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:46,814] (heat-config) [DEBUG] [2018-10-02 10:40:46,803] (heat-config) [INFO] artifact_urls=", "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ControllerArtifactsDeploy-dbwpehfv75bl-0-fe7papsqibqn/bc721fde-7e9c-4f09-a56a-c278b56574d6", "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:46,804] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c", "[2018-10-02 10:40:46,810] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-10-02 10:40:46,810] (heat-config) [DEBUG] ", "[2018-10-02 10:40:46,810] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c", "", "[2018-10-02 10:40:46,814] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:46,814] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.notify.json", "[2018-10-02 10:40:47,236] (heat-config) [INFO] ", "[2018-10-02 10:40:47,236] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:47,309 p=605 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-10-02 10:40:47,309 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.727) 0:01:17.582 ******* >2018-10-02 10:40:47,369 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:46,779] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json", > "[2018-10-02 10:40:46,814] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:46,814] (heat-config) [DEBUG] [2018-10-02 10:40:46,803] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_server_id=101c7c7b-f1a8-4351-b993-c907d4f2794d", > "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ControllerArtifactsDeploy-dbwpehfv75bl-0-fe7papsqibqn/bc721fde-7e9c-4f09-a56a-c278b56574d6", > "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:46,803] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:46,804] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c", > "[2018-10-02 10:40:46,810] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-10-02 10:40:46,810] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:46,810] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/70380c15-38f2-46c4-acce-fc030211028c", > "", > "[2018-10-02 10:40:46,814] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:46,814] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.json < /var/lib/heat-config/deployed/70380c15-38f2-46c4-acce-fc030211028c.notify.json", > "[2018-10-02 10:40:47,236] (heat-config) [INFO] ", > "[2018-10-02 10:40:47,236] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:47,399 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 10:40:47,399 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.090) 0:01:17.672 ******* >2018-10-02 10:40:47,416 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,443 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:47,443 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.044) 0:01:17.717 ******* >2018-10-02 10:40:47,529 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "879f6f7f-c7c4-49af-807a-bafc463b387d"}, "changed": false} >2018-10-02 10:40:47,558 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:47,558 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.114) 0:01:17.831 ******* >2018-10-02 10:40:47,642 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} >2018-10-02 10:40:47,670 p=605 u=mistral | TASK [Create hiera check-mode directory] 
*************************************** >2018-10-02 10:40:47,670 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.111) 0:01:17.943 ******* >2018-10-02 10:40:47,689 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,716 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:47,717 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.046) 0:01:17.990 ******* >2018-10-02 10:40:47,740 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,770 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:47,770 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.053) 0:01:18.043 ******* >2018-10-02 10:40:47,790 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,820 p=605 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment for check-mode] *** >2018-10-02 10:40:47,820 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.050) 0:01:18.093 ******* >2018-10-02 10:40:47,840 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,869 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:47,869 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.049) 0:01:18.142 ******* >2018-10-02 10:40:47,890 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,917 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:47,917 p=605 u=mistral | Tuesday 02 October 2018 
10:40:47 -0400 (0:00:00.047) 0:01:18.190 ******* >2018-10-02 10:40:47,934 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:47,960 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:47,960 p=605 u=mistral | Tuesday 02 October 2018 10:40:47 -0400 (0:00:00.043) 0:01:18.233 ******* >2018-10-02 10:40:47,981 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:48,006 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:48,006 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.045) 0:01:18.279 ******* >2018-10-02 10:40:48,026 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:48,053 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:48,054 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.047) 0:01:18.327 ******* >2018-10-02 10:40:48,071 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:48,098 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:48,098 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.044) 0:01:18.371 ******* >2018-10-02 10:40:48,116 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:40:48,145 p=605 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-10-02 10:40:48,145 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.047) 0:01:18.419 ******* >2018-10-02 10:40:48,696 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "667c1a97e9317d20396c8a4e961d8a69242fa819", "dest": 
"/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-879f6f7f-c7c4-49af-807a-bafc463b387d", "gid": 0, "group": "root", "md5sum": "37d97755d7da7ba8780b7479d0c75bee", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21378, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491248.23-251227599964297/source", "state": "file", "uid": 0} >2018-10-02 10:40:48,724 p=605 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-10-02 10:40:48,725 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.579) 0:01:18.998 ******* >2018-10-02 10:40:48,934 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:48,963 p=605 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-10-02 10:40:48,963 p=605 u=mistral | Tuesday 02 October 2018 10:40:48 -0400 (0:00:00.238) 0:01:19.237 ******* >2018-10-02 10:40:48,983 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:49,011 p=605 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-10-02 10:40:49,011 p=605 u=mistral | Tuesday 02 October 2018 10:40:49 -0400 (0:00:00.047) 0:01:19.284 ******* >2018-10-02 10:40:49,032 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:49,061 p=605 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-10-02 10:40:49,061 p=605 u=mistral | Tuesday 02 October 2018 10:40:49 -0400 (0:00:00.049) 0:01:19.334 ******* >2018-10-02 10:40:49,080 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:49,108 p=605 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] 
***************************** >2018-10-02 10:40:49,108 p=605 u=mistral | Tuesday 02 October 2018 10:40:49 -0400 (0:00:00.047) 0:01:19.381 ******* >2018-10-02 10:40:55,849 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.notify.json)", "delta": "0:00:06.535819", "end": "2018-10-02 10:40:55.814593", "rc": 0, "start": "2018-10-02 10:40:49.278774", "stderr": "[2018-10-02 10:40:49,306] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json\n[2018-10-02 10:40:55,388] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:55,389] (heat-config) [DEBUG] [2018-10-02 10:40:49,329] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_variables.json\n[2018-10-02 10:40:55,384] (heat-config) [INFO] Return code 0\n[2018-10-02 10:40:55,384] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: 
[localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 10:40:55,384] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml\n\n[2018-10-02 10:40:55,389] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 10:40:55,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json < /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.notify.json\n[2018-10-02 10:40:55,807] (heat-config) [INFO] \n[2018-10-02 10:40:55,807] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:49,306] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json", "[2018-10-02 10:40:55,388] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:55,389] (heat-config) [DEBUG] [2018-10-02 10:40:49,329] (heat-config) [DEBUG] Running ansible-playbook -i localhost, 
/var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_variables.json", "[2018-10-02 10:40:55,384] (heat-config) [INFO] Return code 0", "[2018-10-02 10:40:55,384] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 10:40:55,384] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml", "", "[2018-10-02 10:40:55,389] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 10:40:55,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json < /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.notify.json", "[2018-10-02 10:40:55,807] (heat-config) [INFO] ", "[2018-10-02 10:40:55,807] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:55,879 p=605 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-10-02 10:40:55,879 p=605 u=mistral | Tuesday 02 October 2018 10:40:55 -0400 (0:00:06.770) 0:01:26.152 ******* >2018-10-02 10:40:55,936 p=605 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:40:49,306] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < 
/var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json", > "[2018-10-02 10:40:55,388] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:55,389] (heat-config) [DEBUG] [2018-10-02 10:40:49,329] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_variables.json", > "[2018-10-02 10:40:55,384] (heat-config) [INFO] Return code 0", > "[2018-10-02 10:40:55,384] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 10:40:55,384] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/879f6f7f-c7c4-49af-807a-bafc463b387d_playbook.yaml", > "", > "[2018-10-02 10:40:55,389] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 10:40:55,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.json < /var/lib/heat-config/deployed/879f6f7f-c7c4-49af-807a-bafc463b387d.notify.json", > "[2018-10-02 10:40:55,807] (heat-config) [INFO] ", > "[2018-10-02 10:40:55,807] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:55,969 p=605 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:40:55,969 p=605 u=mistral | Tuesday 02 October 2018 10:40:55 -0400 (0:00:00.090) 0:01:26.242 ******* >2018-10-02 10:40:55,986 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,010 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:56,010 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.041) 0:01:26.284 ******* >2018-10-02 10:40:56,073 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0ea46602-b26f-477e-905a-759022ede75e"}, "changed": false} >2018-10-02 10:40:56,096 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:56,096 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.085) 0:01:26.370 ******* >2018-10-02 10:40:56,163 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:40:56,187 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:56,187 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.090) 0:01:26.460 ******* >2018-10-02 10:40:56,209 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 10:40:56,232 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:56,232 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.044) 0:01:26.505 ******* >2018-10-02 10:40:56,253 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,277 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:56,277 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.045) 0:01:26.550 ******* >2018-10-02 10:40:56,298 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,321 p=605 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment for check-mode] *** >2018-10-02 10:40:56,322 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.044) 0:01:26.595 ******* >2018-10-02 10:40:56,341 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,365 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:56,365 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.043) 0:01:26.638 ******* >2018-10-02 10:40:56,384 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,408 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:56,408 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.042) 0:01:26.681 ******* >2018-10-02 10:40:56,429 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,452 p=605 u=mistral | TASK [diff hieradata changes for check mode] 
*********************************** >2018-10-02 10:40:56,452 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.044) 0:01:26.725 ******* >2018-10-02 10:40:56,475 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,499 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:56,500 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.047) 0:01:26.773 ******* >2018-10-02 10:40:56,523 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:56,546 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:56,546 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.046) 0:01:26.819 ******* >2018-10-02 10:40:56,569 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:56,590 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:56,590 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.043) 0:01:26.863 ******* >2018-10-02 10:40:56,621 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:56,640 p=605 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-10-02 10:40:56,641 p=605 u=mistral | Tuesday 02 October 2018 10:40:56 -0400 (0:00:00.050) 0:01:26.914 ******* >2018-10-02 10:40:57,158 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d4a403ecec3f14d7cebf650bff9098d7ba368660", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-0ea46602-b26f-477e-905a-759022ede75e", "gid": 0, "group": "root", "md5sum": "83e3e1067caf73af145a53cee1fa5543", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491256.7-246991410940130/source", "state": "file", "uid": 0} >2018-10-02 10:40:57,183 p=605 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-10-02 10:40:57,183 p=605 u=mistral | Tuesday 02 October 2018 10:40:57 -0400 (0:00:00.542) 0:01:27.456 ******* >2018-10-02 10:40:57,376 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:40:57,401 p=605 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-10-02 10:40:57,401 p=605 u=mistral | Tuesday 02 October 2018 10:40:57 -0400 (0:00:00.218) 0:01:27.674 ******* >2018-10-02 10:40:57,421 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:57,445 p=605 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-10-02 10:40:57,445 p=605 u=mistral | Tuesday 02 October 2018 10:40:57 -0400 (0:00:00.044) 0:01:27.718 ******* >2018-10-02 10:40:57,468 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:57,492 p=605 u=mistral | TASK [Force remove deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-10-02 10:40:57,492 p=605 u=mistral | Tuesday 02 October 2018 10:40:57 -0400 (0:00:00.047) 0:01:27.765 ******* >2018-10-02 10:40:57,512 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:57,536 p=605 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-10-02 10:40:57,537 p=605 u=mistral | Tuesday 02 October 2018 10:40:57 -0400 (0:00:00.044) 0:01:27.810 ******* >2018-10-02 10:40:58,262 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit 
$(jq .deploy_status_code /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.notify.json)", "delta": "0:00:00.447112", "end": "2018-10-02 10:40:58.237860", "rc": 0, "start": "2018-10-02 10:40:57.790748", "stderr": "[2018-10-02 10:40:57,816] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json\n[2018-10-02 10:40:57,845] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:40:57,846] (heat-config) [DEBUG] [2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NovaComputeUpgradeInitDeployment-6lt2yojxtxqd/128b055d-e08b-40cd-a8dd-e6df2e847e83\n[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:40:57,839] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e\n[2018-10-02 10:40:57,842] (heat-config) [INFO] \n[2018-10-02 10:40:57,842] (heat-config) [DEBUG] \n[2018-10-02 10:40:57,842] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e\n\n[2018-10-02 10:40:57,846] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:40:57,846] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.notify.json\n[2018-10-02 10:40:58,231] (heat-config) [INFO] \n[2018-10-02 10:40:58,231] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:40:57,816] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json", "[2018-10-02 10:40:57,845] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:40:57,846] (heat-config) [DEBUG] [2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NovaComputeUpgradeInitDeployment-6lt2yojxtxqd/128b055d-e08b-40cd-a8dd-e6df2e847e83", "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:40:57,839] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e", "[2018-10-02 10:40:57,842] (heat-config) [INFO] ", "[2018-10-02 10:40:57,842] (heat-config) [DEBUG] ", "[2018-10-02 10:40:57,842] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e", "", "[2018-10-02 10:40:57,846] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:40:57,846] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.notify.json", "[2018-10-02 10:40:58,231] (heat-config) [INFO] ", "[2018-10-02 10:40:58,231] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:40:58,285 p=605 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-10-02 10:40:58,285 p=605 u=mistral | Tuesday 02 October 2018 10:40:58 -0400 (0:00:00.748) 0:01:28.558 ******* >2018-10-02 10:40:58,415 p=605 u=mistral | ok: [compute-0] => { > 
"msg": [ > { > "stderr": [ > "[2018-10-02 10:40:57,816] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json", > "[2018-10-02 10:40:57,845] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:40:57,846] (heat-config) [DEBUG] [2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-il33pv3dy25e-0-lgbwge7jtszc-NovaComputeUpgradeInitDeployment-6lt2yojxtxqd/128b055d-e08b-40cd-a8dd-e6df2e847e83", > "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:40:57,838] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:40:57,839] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e", > "[2018-10-02 10:40:57,842] (heat-config) [INFO] ", > "[2018-10-02 10:40:57,842] (heat-config) [DEBUG] ", > "[2018-10-02 10:40:57,842] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0ea46602-b26f-477e-905a-759022ede75e", > "", > "[2018-10-02 10:40:57,846] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:40:57,846] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.json < /var/lib/heat-config/deployed/0ea46602-b26f-477e-905a-759022ede75e.notify.json", > "[2018-10-02 10:40:58,231] (heat-config) [INFO] ", > "[2018-10-02 10:40:58,231] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:40:58,438 p=605 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 
10:40:58,438 p=605 u=mistral | Tuesday 02 October 2018 10:40:58 -0400 (0:00:00.153) 0:01:28.711 ******* >2018-10-02 10:40:58,455 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:58,477 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:40:58,478 p=605 u=mistral | Tuesday 02 October 2018 10:40:58 -0400 (0:00:00.039) 0:01:28.751 ******* >2018-10-02 10:40:58,707 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c57f5c26-08f9-4529-846b-26fabda9210f"}, "changed": false} >2018-10-02 10:40:58,729 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:40:58,730 p=605 u=mistral | Tuesday 02 October 2018 10:40:58 -0400 (0:00:00.252) 0:01:29.003 ******* >2018-10-02 10:40:58,962 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 10:40:58,984 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:40:58,984 p=605 u=mistral | Tuesday 02 October 2018 10:40:58 -0400 (0:00:00.254) 0:01:29.257 ******* >2018-10-02 10:40:59,003 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,027 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:40:59,027 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.042) 0:01:29.300 ******* >2018-10-02 10:40:59,049 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,120 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:40:59,120 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.093) 0:01:29.393 ******* 
>2018-10-02 10:40:59,143 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,166 p=605 u=mistral | TASK [Render deployment file for NovaComputeDeployment for check-mode] ********* >2018-10-02 10:40:59,166 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.045) 0:01:29.439 ******* >2018-10-02 10:40:59,186 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,208 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:40:59,208 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.042) 0:01:29.482 ******* >2018-10-02 10:40:59,228 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,250 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:40:59,250 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.041) 0:01:29.523 ******* >2018-10-02 10:40:59,271 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,293 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:59,293 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.043) 0:01:29.567 ******* >2018-10-02 10:40:59,317 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,338 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:40:59,338 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.044) 0:01:29.612 ******* >2018-10-02 10:40:59,360 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:59,380 p=605 u=mistral | TASK 
[hiera.yaml changes for check mode] *************************************** >2018-10-02 10:40:59,380 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.041) 0:01:29.653 ******* >2018-10-02 10:40:59,399 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:40:59,419 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:40:59,419 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.038) 0:01:29.692 ******* >2018-10-02 10:40:59,437 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:40:59,460 p=605 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-10-02 10:40:59,460 p=605 u=mistral | Tuesday 02 October 2018 10:40:59 -0400 (0:00:00.040) 0:01:29.733 ******* >2018-10-02 10:41:00,088 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ebe27d919e18a858e8253d7fa65a8dedeeda7fa9", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-c57f5c26-08f9-4529-846b-26fabda9210f", "gid": 0, "group": "root", "md5sum": "9735e61c6b1abf015f8092db58ff3607", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22257, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491259.63-1917386277617/source", "state": "file", "uid": 0} >2018-10-02 10:41:00,109 p=605 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-10-02 10:41:00,109 p=605 u=mistral | Tuesday 02 October 2018 10:41:00 -0400 (0:00:00.649) 0:01:30.382 ******* >2018-10-02 10:41:00,301 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:00,326 p=605 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-10-02 10:41:00,326 p=605 u=mistral | Tuesday 02 October 2018 10:41:00 -0400 (0:00:00.217) 0:01:30.600 
******* >2018-10-02 10:41:00,346 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:00,370 p=605 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-10-02 10:41:00,370 p=605 u=mistral | Tuesday 02 October 2018 10:41:00 -0400 (0:00:00.043) 0:01:30.643 ******* >2018-10-02 10:41:00,393 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:00,416 p=605 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-10-02 10:41:00,416 p=605 u=mistral | Tuesday 02 October 2018 10:41:00 -0400 (0:00:00.046) 0:01:30.689 ******* >2018-10-02 10:41:00,436 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:00,457 p=605 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-10-02 10:41:00,458 p=605 u=mistral | Tuesday 02 October 2018 10:41:00 -0400 (0:00:00.041) 0:01:30.731 ******* >2018-10-02 10:41:01,226 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.notify.json)", "delta": "0:00:00.568676", "end": "2018-10-02 10:41:01.201129", "rc": 0, "start": "2018-10-02 10:41:00.632453", "stderr": "[2018-10-02 10:41:00,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json\n[2018-10-02 10:41:00,792] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:00,792] (heat-config) [DEBUG] \n[2018-10-02 10:41:00,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:41:00,792] 
(heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.notify.json\n[2018-10-02 10:41:01,195] (heat-config) [INFO] \n[2018-10-02 10:41:01,195] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:00,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json", "[2018-10-02 10:41:00,792] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:00,792] (heat-config) [DEBUG] ", "[2018-10-02 10:41:00,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:41:00,792] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.notify.json", "[2018-10-02 10:41:01,195] (heat-config) [INFO] ", "[2018-10-02 10:41:01,195] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:01,250 p=605 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-10-02 10:41:01,251 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.792) 0:01:31.524 ******* >2018-10-02 10:41:01,312 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:00,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json", > "[2018-10-02 10:41:00,792] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:00,792] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:00,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:41:00,792] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.json < /var/lib/heat-config/deployed/c57f5c26-08f9-4529-846b-26fabda9210f.notify.json", > "[2018-10-02 10:41:01,195] (heat-config) [INFO] ", > "[2018-10-02 10:41:01,195] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:01,338 p=605 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:01,338 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.087) 0:01:31.611 ******* >2018-10-02 10:41:01,354 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,376 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:01,376 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.038) 0:01:31.650 ******* >2018-10-02 10:41:01,444 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "731e5100-2d94-45cb-9be3-79a4bd5ea06d"}, "changed": false} >2018-10-02 10:41:01,466 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:01,467 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.090) 0:01:31.740 ******* >2018-10-02 10:41:01,532 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:01,553 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:01,553 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.086) 0:01:31.826 ******* >2018-10-02 10:41:01,575 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,596 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ 
>2018-10-02 10:41:01,596 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.042) 0:01:31.869 ******* >2018-10-02 10:41:01,614 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,634 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:01,634 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.038) 0:01:31.908 ******* >2018-10-02 10:41:01,655 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,679 p=605 u=mistral | TASK [Render deployment file for ComputeHostsDeployment for check-mode] ******** >2018-10-02 10:41:01,679 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.044) 0:01:31.952 ******* >2018-10-02 10:41:01,698 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,721 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:01,721 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.042) 0:01:31.994 ******* >2018-10-02 10:41:01,740 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,762 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:01,762 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.041) 0:01:32.036 ******* >2018-10-02 10:41:01,780 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,798 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:01,799 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.036) 0:01:32.072 ******* >2018-10-02 
10:41:01,819 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,838 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:01,838 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.039) 0:01:32.112 ******* >2018-10-02 10:41:01,860 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:01,878 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:01,878 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.039) 0:01:32.152 ******* >2018-10-02 10:41:01,895 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:01,915 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:01,915 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.037) 0:01:32.189 ******* >2018-10-02 10:41:01,933 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:01,956 p=605 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-10-02 10:41:01,956 p=605 u=mistral | Tuesday 02 October 2018 10:41:01 -0400 (0:00:00.040) 0:01:32.230 ******* >2018-10-02 10:41:02,480 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "8da519e58846ef33ac57d29675cc61c4ca9a90b2", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-731e5100-2d94-45cb-9be3-79a4bd5ea06d", "gid": 0, "group": "root", "md5sum": "a0f40c8cba9c346df81a3f2ee987fdca", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4423, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491262.02-49404859116420/source", "state": "file", "uid": 0} >2018-10-02 10:41:02,506 p=605 u=mistral | TASK [Check if deployed file exists for 
ComputeHostsDeployment] **************** >2018-10-02 10:41:02,506 p=605 u=mistral | Tuesday 02 October 2018 10:41:02 -0400 (0:00:00.549) 0:01:32.779 ******* >2018-10-02 10:41:02,704 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:02,729 p=605 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-10-02 10:41:02,729 p=605 u=mistral | Tuesday 02 October 2018 10:41:02 -0400 (0:00:00.223) 0:01:33.002 ******* >2018-10-02 10:41:02,750 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:02,773 p=605 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-10-02 10:41:02,774 p=605 u=mistral | Tuesday 02 October 2018 10:41:02 -0400 (0:00:00.044) 0:01:33.047 ******* >2018-10-02 10:41:02,795 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:02,819 p=605 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-10-02 10:41:02,819 p=605 u=mistral | Tuesday 02 October 2018 10:41:02 -0400 (0:00:00.045) 0:01:33.092 ******* >2018-10-02 10:41:02,840 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:02,863 p=605 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-10-02 10:41:02,863 p=605 u=mistral | Tuesday 02 October 2018 10:41:02 -0400 (0:00:00.044) 0:01:33.136 ******* >2018-10-02 10:41:03,596 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.notify.json)", "delta": "0:00:00.490699", "end": "2018-10-02 10:41:03.535460", "rc": 0, "start": "2018-10-02 
10:41:03.044761", "stderr": "[2018-10-02 10:41:03,072] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json\n[2018-10-02 10:41:03,125] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:03,125] (heat-config) [DEBUG] [2018-10-02 10:41:03,095] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 10:41:03,095] (heat-config) [INFO] 
deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-amyr7rsgchj7-0-v2cp56c24ocr/199091d3-6f10-4e6b-b874-0a507907d8fd\n[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:03,096] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d\n[2018-10-02 10:41:03,121] (heat-config) [INFO] \n[2018-10-02 10:41:03,121] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 
ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /compute-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 10:41:03,122] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d\n\n[2018-10-02 10:41:03,125] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:03,126] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.notify.json\n[2018-10-02 10:41:03,528] (heat-config) [INFO] \n[2018-10-02 10:41:03,529] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:03,072] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json", "[2018-10-02 10:41:03,125] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain 
compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:03,125] (heat-config) [DEBUG] [2018-10-02 10:41:03,095] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 10:41:03,095] (heat-config) 
[INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-amyr7rsgchj7-0-v2cp56c24ocr/199091d3-6f10-4e6b-b874-0a507907d8fd", "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:03,096] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d", "[2018-10-02 10:41:03,121] (heat-config) [INFO] ", "[2018-10-02 10:41:03,121] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", 
"192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 
ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain 
ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 
ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /compute-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 10:41:03,122] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d", "", "[2018-10-02 10:41:03,125] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:03,126] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.notify.json", "[2018-10-02 10:41:03,528] (heat-config) [INFO] ", "[2018-10-02 10:41:03,529] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:03,641 p=605 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-10-02 10:41:03,641 p=605 u=mistral | Tuesday 02 October 2018 10:41:03 -0400 (0:00:00.777) 0:01:33.914 ******* >2018-10-02 10:41:03,733 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:03,072] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json", > "[2018-10-02 10:41:03,125] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain 
controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:03,125] (heat-config) [DEBUG] [2018-10-02 10:41:03,095] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-amyr7rsgchj7-0-v2cp56c24ocr/199091d3-6f10-4e6b-b874-0a507907d8fd", > "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:03,095] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:03,096] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d", > "[2018-10-02 10:41:03,121] (heat-config) [INFO] ", > "[2018-10-02 10:41:03,121] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 
compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > 
"192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", 
> "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > 
"172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain 
compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain 
controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 
controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > 
"192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 10:41:03,122] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/731e5100-2d94-45cb-9be3-79a4bd5ea06d", > "", > "[2018-10-02 10:41:03,125] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:03,126] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.json < /var/lib/heat-config/deployed/731e5100-2d94-45cb-9be3-79a4bd5ea06d.notify.json", > "[2018-10-02 10:41:03,528] (heat-config) [INFO] ", > "[2018-10-02 10:41:03,529] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:03,776 p=605 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:03,776 p=605 u=mistral | Tuesday 02 October 2018 10:41:03 -0400 (0:00:00.135) 0:01:34.050 ******* >2018-10-02 10:41:03,794 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:03,816 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:03,816 p=605 u=mistral | Tuesday 02 October 2018 10:41:03 -0400 (0:00:00.039) 0:01:34.089 ******* >2018-10-02 10:41:03,986 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "19db6efd-e5a9-4f69-aa48-92661488c1a6"}, "changed": false} >2018-10-02 10:41:04,011 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:04,011 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.195) 0:01:34.285 ******* >2018-10-02 10:41:04,169 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} 
>2018-10-02 10:41:04,191 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:04,191 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.179) 0:01:34.464 ******* >2018-10-02 10:41:04,213 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,233 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:04,233 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.041) 0:01:34.506 ******* >2018-10-02 10:41:04,256 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,279 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:04,279 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.045) 0:01:34.552 ******* >2018-10-02 10:41:04,300 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,324 p=605 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment for check-mode] ***** >2018-10-02 10:41:04,324 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.045) 0:01:34.598 ******* >2018-10-02 10:41:04,345 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,368 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:04,369 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.044) 0:01:34.642 ******* >2018-10-02 10:41:04,388 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,411 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* 
>2018-10-02 10:41:04,411 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.042) 0:01:34.685 ******* >2018-10-02 10:41:04,433 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,459 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:04,459 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.047) 0:01:34.732 ******* >2018-10-02 10:41:04,481 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,503 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:04,503 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.043) 0:01:34.776 ******* >2018-10-02 10:41:04,526 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:04,547 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:04,548 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.044) 0:01:34.821 ******* >2018-10-02 10:41:04,569 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:04,591 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:04,592 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.044) 0:01:34.865 ******* >2018-10-02 10:41:04,621 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:04,644 p=605 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-10-02 10:41:04,645 p=605 u=mistral | Tuesday 02 October 2018 10:41:04 -0400 (0:00:00.052) 0:01:34.918 ******* >2018-10-02 10:41:05,304 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": 
"362ba24e8d4ed5012ce44d769a2cb20728cc56fe", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-19db6efd-e5a9-4f69-aa48-92661488c1a6", "gid": 0, "group": "root", "md5sum": "63e60fa585d3a74d46b78492214df8e4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19532, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491264.83-54713710506665/source", "state": "file", "uid": 0} >2018-10-02 10:41:05,330 p=605 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-10-02 10:41:05,330 p=605 u=mistral | Tuesday 02 October 2018 10:41:05 -0400 (0:00:00.685) 0:01:35.603 ******* >2018-10-02 10:41:05,525 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:05,549 p=605 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-10-02 10:41:05,549 p=605 u=mistral | Tuesday 02 October 2018 10:41:05 -0400 (0:00:00.219) 0:01:35.823 ******* >2018-10-02 10:41:05,570 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:05,594 p=605 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-10-02 10:41:05,594 p=605 u=mistral | Tuesday 02 October 2018 10:41:05 -0400 (0:00:00.044) 0:01:35.867 ******* >2018-10-02 10:41:05,616 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:05,640 p=605 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-10-02 10:41:05,640 p=605 u=mistral | Tuesday 02 October 2018 10:41:05 -0400 (0:00:00.046) 0:01:35.914 ******* >2018-10-02 10:41:05,660 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:05,683 p=605 u=mistral | TASK [Run deployment 
ComputeAllNodesDeployment] ******************************** >2018-10-02 10:41:05,684 p=605 u=mistral | Tuesday 02 October 2018 10:41:05 -0400 (0:00:00.043) 0:01:35.957 ******* >2018-10-02 10:41:06,417 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.notify.json)", "delta": "0:00:00.536259", "end": "2018-10-02 10:41:06.393444", "rc": 0, "start": "2018-10-02 10:41:05.857185", "stderr": "[2018-10-02 10:41:05,883] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json\n[2018-10-02 10:41:06,006] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:06,007] (heat-config) [DEBUG] \n[2018-10-02 10:41:06,007] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:41:06,007] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json < /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.notify.json\n[2018-10-02 10:41:06,387] (heat-config) [INFO] \n[2018-10-02 10:41:06,387] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:05,883] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json", "[2018-10-02 10:41:06,006] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:06,007] (heat-config) [DEBUG] ", "[2018-10-02 10:41:06,007] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:41:06,007] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json < 
/var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.notify.json", "[2018-10-02 10:41:06,387] (heat-config) [INFO] ", "[2018-10-02 10:41:06,387] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:06,440 p=605 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-10-02 10:41:06,440 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.756) 0:01:36.714 ******* >2018-10-02 10:41:06,495 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:05,883] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json", > "[2018-10-02 10:41:06,006] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:06,007] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:06,007] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:41:06,007] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.json < /var/lib/heat-config/deployed/19db6efd-e5a9-4f69-aa48-92661488c1a6.notify.json", > "[2018-10-02 10:41:06,387] (heat-config) [INFO] ", > "[2018-10-02 10:41:06,387] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:06,517 p=605 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:06,517 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.076) 0:01:36.790 ******* >2018-10-02 10:41:06,532 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,553 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:06,553 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 
-0400 (0:00:00.036) 0:01:36.826 ******* >2018-10-02 10:41:06,619 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "897ec47a-4609-474e-b411-edee7ad408eb"}, "changed": false} >2018-10-02 10:41:06,637 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:06,637 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.083) 0:01:36.910 ******* >2018-10-02 10:41:06,701 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:06,721 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:06,721 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.084) 0:01:36.994 ******* >2018-10-02 10:41:06,739 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,757 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:06,757 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.036) 0:01:37.030 ******* >2018-10-02 10:41:06,776 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,797 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:06,797 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.039) 0:01:37.070 ******* >2018-10-02 10:41:06,815 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,837 p=605 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment for check-mode] *** >2018-10-02 10:41:06,838 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.040) 0:01:37.111 ******* >2018-10-02 10:41:06,855 p=605 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,875 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:06,875 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.037) 0:01:37.149 ******* >2018-10-02 10:41:06,892 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,910 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:06,910 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.034) 0:01:37.184 ******* >2018-10-02 10:41:06,927 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,947 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:06,947 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.036) 0:01:37.220 ******* >2018-10-02 10:41:06,969 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:06,990 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:06,990 p=605 u=mistral | Tuesday 02 October 2018 10:41:06 -0400 (0:00:00.042) 0:01:37.263 ******* >2018-10-02 10:41:07,014 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:07,035 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:07,035 p=605 u=mistral | Tuesday 02 October 2018 10:41:07 -0400 (0:00:00.044) 0:01:37.308 ******* >2018-10-02 10:41:07,053 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:07,077 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] 
********************************** >2018-10-02 10:41:07,077 p=605 u=mistral | Tuesday 02 October 2018 10:41:07 -0400 (0:00:00.041) 0:01:37.350 ******* >2018-10-02 10:41:07,104 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:07,132 p=605 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-10-02 10:41:07,133 p=605 u=mistral | Tuesday 02 October 2018 10:41:07 -0400 (0:00:00.055) 0:01:37.406 ******* >2018-10-02 10:41:07,734 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "41239edb210e2a2c3bc5f194f23ff4ba1fd6d894", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-897ec47a-4609-474e-b411-edee7ad408eb", "gid": 0, "group": "root", "md5sum": "7ebbf5c0dbd1fb636e7c1aa64554d56a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491267.28-216623751785511/source", "state": "file", "uid": 0} >2018-10-02 10:41:07,758 p=605 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-10-02 10:41:07,758 p=605 u=mistral | Tuesday 02 October 2018 10:41:07 -0400 (0:00:00.625) 0:01:38.032 ******* >2018-10-02 10:41:08,032 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:08,054 p=605 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-10-02 10:41:08,054 p=605 u=mistral | Tuesday 02 October 2018 10:41:08 -0400 (0:00:00.295) 0:01:38.327 ******* >2018-10-02 10:41:08,073 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:08,096 p=605 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 10:41:08,096 p=605 u=mistral | Tuesday 02 October 2018 10:41:08 -0400 (0:00:00.042) 0:01:38.369 
******* >2018-10-02 10:41:08,117 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:08,138 p=605 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-10-02 10:41:08,138 p=605 u=mistral | Tuesday 02 October 2018 10:41:08 -0400 (0:00:00.042) 0:01:38.412 ******* >2018-10-02 10:41:08,157 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:08,180 p=605 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-10-02 10:41:08,180 p=605 u=mistral | Tuesday 02 October 2018 10:41:08 -0400 (0:00:00.041) 0:01:38.454 ******* >2018-10-02 10:41:09,511 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.notify.json)", "delta": "0:00:01.055291", "end": "2018-10-02 10:41:09.485974", "rc": 0, "start": "2018-10-02 10:41:08.430683", "stderr": "[2018-10-02 10:41:08,458] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json\n[2018-10-02 10:41:09,048] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:09,048] (heat-config) 
[DEBUG] [2018-10-02 10:41:08,481] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12\n[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-l2fuygubfbrx-0-meoph7qhftvp/54c90828-4c6b-4abf-8370-a538156ebd55\n[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:08,481] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb\n[2018-10-02 10:41:09,043] (heat-config) [INFO] Trying to ping 172.17.1.14 for local network 172.17.1.0/24.\nPing to 172.17.1.14 succeeded.\nSUCCESS\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\nPing to 172.17.2.12 succeeded.\nSUCCESS\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\nPing to 172.17.3.25 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-10-02 10:41:09,044] (heat-config) [DEBUG] \n[2018-10-02 10:41:09,044] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb\n\n[2018-10-02 10:41:09,048] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:09,048] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json < 
/var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.notify.json\n[2018-10-02 10:41:09,480] (heat-config) [INFO] \n[2018-10-02 10:41:09,480] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:08,458] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json", "[2018-10-02 10:41:09,048] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:09,048] (heat-config) [DEBUG] [2018-10-02 10:41:08,481] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", "[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_fqdn=False", "[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-l2fuygubfbrx-0-meoph7qhftvp/54c90828-4c6b-4abf-8370-a538156ebd55", "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:08,481] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb", "[2018-10-02 10:41:09,043] (heat-config) [INFO] Trying to ping 172.17.1.14 for local network 172.17.1.0/24.", "Ping to 172.17.1.14 succeeded.", "SUCCESS", "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", "Ping to 172.17.2.12 succeeded.", "SUCCESS", "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", "Ping to 172.17.3.25 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-10-02 10:41:09,044] (heat-config) [DEBUG] ", "[2018-10-02 10:41:09,044] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb", "", "[2018-10-02 10:41:09,048] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:09,048] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json < /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.notify.json", "[2018-10-02 10:41:09,480] (heat-config) [INFO] ", "[2018-10-02 10:41:09,480] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:09,535 p=605 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-10-02 10:41:09,536 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:01.355) 0:01:39.809 ******* >2018-10-02 10:41:09,594 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:08,458] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json", > "[2018-10-02 10:41:09,048] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.14 for local network 172.17.1.0/24.\\nPing to 172.17.1.14 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:09,048] (heat-config) [DEBUG] [2018-10-02 10:41:08,481] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-l2fuygubfbrx-0-meoph7qhftvp/54c90828-4c6b-4abf-8370-a538156ebd55", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:08,481] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:08,481] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb", > "[2018-10-02 10:41:09,043] (heat-config) [INFO] Trying to ping 172.17.1.14 for local network 172.17.1.0/24.", > "Ping to 172.17.1.14 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", > "Ping to 172.17.2.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", > "Ping to 172.17.3.25 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > 
"SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 10:41:09,044] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:09,044] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/897ec47a-4609-474e-b411-edee7ad408eb", > "", > "[2018-10-02 10:41:09,048] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:09,048] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.json < /var/lib/heat-config/deployed/897ec47a-4609-474e-b411-edee7ad408eb.notify.json", > "[2018-10-02 10:41:09,480] (heat-config) [INFO] ", > "[2018-10-02 10:41:09,480] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:09,620 p=605 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:09,620 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.084) 0:01:39.893 ******* >2018-10-02 10:41:09,636 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:09,657 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:09,658 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.037) 0:01:39.931 ******* >2018-10-02 10:41:09,742 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "ca904bd1-4bc6-4eac-af30-b68d1ff53967"}, "changed": false} >2018-10-02 10:41:09,765 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:09,765 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.107) 0:01:40.039 ******* >2018-10-02 10:41:09,844 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} >2018-10-02 
10:41:09,864 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:09,865 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.099) 0:01:40.138 ******* >2018-10-02 10:41:09,883 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:09,902 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:09,902 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.037) 0:01:40.176 ******* >2018-10-02 10:41:09,921 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:09,941 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:09,941 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.038) 0:01:40.214 ******* >2018-10-02 10:41:09,960 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:09,980 p=605 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment for check-mode] ***** >2018-10-02 10:41:09,981 p=605 u=mistral | Tuesday 02 October 2018 10:41:09 -0400 (0:00:00.039) 0:01:40.254 ******* >2018-10-02 10:41:09,998 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:10,017 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:10,017 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.036) 0:01:40.290 ******* >2018-10-02 10:41:10,035 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:10,053 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 
10:41:10,054 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.036) 0:01:40.327 ******* >2018-10-02 10:41:10,072 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:10,091 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:10,091 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.037) 0:01:40.364 ******* >2018-10-02 10:41:10,111 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:10,130 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:10,130 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.039) 0:01:40.403 ******* >2018-10-02 10:41:10,150 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:10,168 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:10,168 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.038) 0:01:40.442 ******* >2018-10-02 10:41:10,186 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:10,205 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:10,205 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.036) 0:01:40.479 ******* >2018-10-02 10:41:10,228 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:10,251 p=605 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >2018-10-02 10:41:10,251 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.045) 0:01:40.524 ******* >2018-10-02 10:41:10,788 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "0916342a6974d94b10ba09a1c15eade2576b9c1d", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-ca904bd1-4bc6-4eac-af30-b68d1ff53967", "gid": 0, "group": "root", "md5sum": "64ea97043d371cfc9f72734e3397bc5c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21372, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491270.33-36854248471618/source", "state": "file", "uid": 0} >2018-10-02 10:41:10,811 p=605 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-10-02 10:41:10,811 p=605 u=mistral | Tuesday 02 October 2018 10:41:10 -0400 (0:00:00.559) 0:01:41.084 ******* >2018-10-02 10:41:11,003 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:11,025 p=605 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-10-02 10:41:11,026 p=605 u=mistral | Tuesday 02 October 2018 10:41:11 -0400 (0:00:00.214) 0:01:41.299 ******* >2018-10-02 10:41:11,043 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:11,065 p=605 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-10-02 10:41:11,065 p=605 u=mistral | Tuesday 02 October 2018 10:41:11 -0400 (0:00:00.039) 0:01:41.338 ******* >2018-10-02 10:41:11,086 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:11,109 p=605 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-10-02 10:41:11,110 p=605 u=mistral | Tuesday 02 October 2018 10:41:11 -0400 (0:00:00.044) 0:01:41.383 ******* >2018-10-02 10:41:11,128 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:11,153 p=605 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] 
******************************** >2018-10-02 10:41:11,153 p=605 u=mistral | Tuesday 02 October 2018 10:41:11 -0400 (0:00:00.043) 0:01:41.426 ******* >2018-10-02 10:41:17,677 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.notify.json)", "delta": "0:00:06.316624", "end": "2018-10-02 10:41:17.647783", "rc": 0, "start": "2018-10-02 10:41:11.331159", "stderr": "[2018-10-02 10:41:11,357] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json\n[2018-10-02 10:41:17,238] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:17,238] (heat-config) [DEBUG] [2018-10-02 10:41:11,381] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_variables.json\n[2018-10-02 10:41:17,233] (heat-config) [INFO] Return code 0\n[2018-10-02 10:41:17,233] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: 
[localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 10:41:17,234] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml\n\n[2018-10-02 10:41:17,238] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 10:41:17,238] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json < /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.notify.json\n[2018-10-02 10:41:17,641] (heat-config) [INFO] \n[2018-10-02 10:41:17,641] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:11,357] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json", "[2018-10-02 10:41:17,238] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:17,238] (heat-config) [DEBUG] [2018-10-02 10:41:11,381] (heat-config) [DEBUG] Running ansible-playbook -i localhost, 
/var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_variables.json", "[2018-10-02 10:41:17,233] (heat-config) [INFO] Return code 0", "[2018-10-02 10:41:17,233] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 10:41:17,234] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml", "", "[2018-10-02 10:41:17,238] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 10:41:17,238] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json < /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.notify.json", "[2018-10-02 10:41:17,641] (heat-config) [INFO] ", "[2018-10-02 10:41:17,641] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:17,704 p=605 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-10-02 10:41:17,704 p=605 u=mistral | Tuesday 02 October 2018 10:41:17 -0400 (0:00:06.551) 0:01:47.978 ******* >2018-10-02 10:41:17,768 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:11,357] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < 
/var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json", > "[2018-10-02 10:41:17,238] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:17,238] (heat-config) [DEBUG] [2018-10-02 10:41:11,381] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_variables.json", > "[2018-10-02 10:41:17,233] (heat-config) [INFO] Return code 0", > "[2018-10-02 10:41:17,233] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 10:41:17,234] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/ca904bd1-4bc6-4eac-af30-b68d1ff53967_playbook.yaml", > "", > "[2018-10-02 10:41:17,238] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 10:41:17,238] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.json < /var/lib/heat-config/deployed/ca904bd1-4bc6-4eac-af30-b68d1ff53967.notify.json", > "[2018-10-02 10:41:17,641] (heat-config) [INFO] ", > "[2018-10-02 10:41:17,641] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:17,794 p=605 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:17,794 p=605 u=mistral | Tuesday 02 October 2018 10:41:17 -0400 (0:00:00.089) 0:01:48.068 ******* >2018-10-02 10:41:17,812 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:17,835 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:17,835 p=605 u=mistral | Tuesday 02 October 2018 10:41:17 -0400 (0:00:00.041) 0:01:48.109 ******* >2018-10-02 10:41:17,897 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a0b69563-8fee-402a-9843-7568da55382d"}, "changed": false} >2018-10-02 10:41:17,920 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:17,920 p=605 u=mistral | Tuesday 02 October 2018 10:41:17 -0400 (0:00:00.084) 0:01:48.193 ******* >2018-10-02 10:41:17,984 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:18,003 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:18,003 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.082) 0:01:48.276 ******* >2018-10-02 10:41:18,023 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-10-02 10:41:18,041 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:18,041 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.038) 0:01:48.314 ******* >2018-10-02 10:41:18,059 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,079 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:18,079 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.038) 0:01:48.353 ******* >2018-10-02 10:41:18,097 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,118 p=605 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy for check-mode] ******** >2018-10-02 10:41:18,118 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.038) 0:01:48.391 ******* >2018-10-02 10:41:18,136 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,156 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:18,156 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.038) 0:01:48.430 ******* >2018-10-02 10:41:18,174 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,193 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:18,193 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.036) 0:01:48.467 ******* >2018-10-02 10:41:18,210 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,230 p=605 u=mistral | TASK [diff hieradata changes for check mode] 
*********************************** >2018-10-02 10:41:18,230 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.036) 0:01:48.503 ******* >2018-10-02 10:41:18,250 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,270 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:18,270 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.040) 0:01:48.543 ******* >2018-10-02 10:41:18,296 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:18,314 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:18,315 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.044) 0:01:48.588 ******* >2018-10-02 10:41:18,334 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:18,354 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:18,355 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.039) 0:01:48.628 ******* >2018-10-02 10:41:18,372 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:41:18,394 p=605 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-10-02 10:41:18,394 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.039) 0:01:48.667 ******* >2018-10-02 10:41:18,898 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "eaa80bbfde113316b561d2281199fceed4d5c703", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-a0b69563-8fee-402a-9843-7568da55382d", "gid": 0, "group": "root", "md5sum": "9660055dac1fd96520a67c5dca9ea470", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491278.45-196229606696061/source", "state": "file", "uid": 0} >2018-10-02 10:41:18,921 p=605 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-10-02 10:41:18,921 p=605 u=mistral | Tuesday 02 October 2018 10:41:18 -0400 (0:00:00.526) 0:01:49.194 ******* >2018-10-02 10:41:19,109 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:19,133 p=605 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-10-02 10:41:19,133 p=605 u=mistral | Tuesday 02 October 2018 10:41:19 -0400 (0:00:00.212) 0:01:49.406 ******* >2018-10-02 10:41:19,153 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:19,175 p=605 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-10-02 10:41:19,176 p=605 u=mistral | Tuesday 02 October 2018 10:41:19 -0400 (0:00:00.042) 0:01:49.449 ******* >2018-10-02 10:41:19,201 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:19,225 p=605 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-10-02 10:41:19,225 p=605 u=mistral | Tuesday 02 October 2018 10:41:19 -0400 (0:00:00.049) 0:01:49.498 ******* >2018-10-02 10:41:19,245 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:19,267 p=605 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-10-02 10:41:19,267 p=605 u=mistral | Tuesday 02 October 2018 10:41:19 -0400 (0:00:00.042) 0:01:49.540 ******* >2018-10-02 10:41:19,926 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq 
.deploy_status_code /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.notify.json)", "delta": "0:00:00.461698", "end": "2018-10-02 10:41:19.901582", "rc": 0, "start": "2018-10-02 10:41:19.439884", "stderr": "[2018-10-02 10:41:19,465] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json\n[2018-10-02 10:41:19,496] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:19,496] (heat-config) [DEBUG] [2018-10-02 10:41:19,487] (heat-config) [INFO] artifact_urls=\n[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6\n[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ComputeArtifactsDeploy-6ihpc7hrrnv2-0-mumxtgdo7scq/3b6efaca-6e11-4b2a-b116-64910d8fecee\n[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:19,487] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d\n[2018-10-02 10:41:19,492] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-10-02 10:41:19,492] (heat-config) [DEBUG] \n[2018-10-02 10:41:19,492] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d\n\n[2018-10-02 10:41:19,496] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:19,496] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.notify.json\n[2018-10-02 10:41:19,895] (heat-config) [INFO] \n[2018-10-02 10:41:19,896] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:19,465] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json", "[2018-10-02 10:41:19,496] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:19,496] (heat-config) [DEBUG] [2018-10-02 10:41:19,487] (heat-config) [INFO] artifact_urls=", "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ComputeArtifactsDeploy-6ihpc7hrrnv2-0-mumxtgdo7scq/3b6efaca-6e11-4b2a-b116-64910d8fecee", "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:19,487] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d", "[2018-10-02 10:41:19,492] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-10-02 10:41:19,492] (heat-config) [DEBUG] ", "[2018-10-02 10:41:19,492] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d", "", "[2018-10-02 10:41:19,496] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:19,496] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.notify.json", "[2018-10-02 10:41:19,895] (heat-config) [INFO] ", "[2018-10-02 10:41:19,896] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:19,951 p=605 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-10-02 10:41:19,951 p=605 u=mistral | Tuesday 02 October 2018 10:41:19 -0400 (0:00:00.683) 0:01:50.224 ******* >2018-10-02 10:41:20,005 p=605 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:19,465] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json", > "[2018-10-02 10:41:19,496] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:19,496] (heat-config) [DEBUG] [2018-10-02 10:41:19,487] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_server_id=b6a0ceb7-5a15-4be9-a5fc-8134b83a17e6", > "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-ComputeArtifactsDeploy-6ihpc7hrrnv2-0-mumxtgdo7scq/3b6efaca-6e11-4b2a-b116-64910d8fecee", > "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:19,487] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:19,487] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d", > "[2018-10-02 10:41:19,492] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-10-02 10:41:19,492] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:19,492] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a0b69563-8fee-402a-9843-7568da55382d", > "", > "[2018-10-02 10:41:19,496] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:19,496] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.json < /var/lib/heat-config/deployed/a0b69563-8fee-402a-9843-7568da55382d.notify.json", > "[2018-10-02 10:41:19,895] (heat-config) [INFO] ", > "[2018-10-02 10:41:19,896] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:20,028 p=605 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 10:41:20,028 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.077) 0:01:50.302 ******* >2018-10-02 10:41:20,045 p=605 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,069 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:20,070 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.041) 0:01:50.343 ******* >2018-10-02 10:41:20,128 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "1c228d13-9c6a-4faa-9b88-f302f8416901"}, "changed": false} >2018-10-02 10:41:20,151 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:20,151 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.081) 0:01:50.425 ******* >2018-10-02 10:41:20,208 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:20,234 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:20,234 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.082) 0:01:50.507 ******* >2018-10-02 10:41:20,252 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,275 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:20,275 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.041) 0:01:50.548 ******* >2018-10-02 10:41:20,294 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,316 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:20,316 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.041) 0:01:50.590 ******* >2018-10-02 10:41:20,333 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,357 p=605 u=mistral | 
TASK [Render deployment file for CephStorageUpgradeInitDeployment for check-mode] *** >2018-10-02 10:41:20,358 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.041) 0:01:50.631 ******* >2018-10-02 10:41:20,374 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,397 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:20,397 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.039) 0:01:50.670 ******* >2018-10-02 10:41:20,414 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,437 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:20,437 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.039) 0:01:50.710 ******* >2018-10-02 10:41:20,454 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,476 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:20,476 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.039) 0:01:50.749 ******* >2018-10-02 10:41:20,496 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,518 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:20,518 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.041) 0:01:50.791 ******* >2018-10-02 10:41:20,545 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:20,571 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:20,572 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.053) 0:01:50.845 ******* 
>2018-10-02 10:41:20,591 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:20,615 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:20,615 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.043) 0:01:50.889 ******* >2018-10-02 10:41:20,635 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:20,662 p=605 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-10-02 10:41:20,663 p=605 u=mistral | Tuesday 02 October 2018 10:41:20 -0400 (0:00:00.047) 0:01:50.936 ******* >2018-10-02 10:41:21,238 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "02fe6cd09b3164cd9b3efaaf76ea33c0884a0c3d", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-1c228d13-9c6a-4faa-9b88-f302f8416901", "gid": 0, "group": "root", "md5sum": "428170595ddae9428c0fe2b6ec87e3e2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491280.8-30148508374745/source", "state": "file", "uid": 0} >2018-10-02 10:41:21,263 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-10-02 10:41:21,264 p=605 u=mistral | Tuesday 02 October 2018 10:41:21 -0400 (0:00:00.601) 0:01:51.537 ******* >2018-10-02 10:41:21,525 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:21,552 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-10-02 10:41:21,552 p=605 u=mistral | Tuesday 02 October 2018 10:41:21 -0400 (0:00:00.288) 0:01:51.825 ******* >2018-10-02 10:41:21,573 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:21,601 p=605 u=mistral | TASK [Remove 
deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-10-02 10:41:21,601 p=605 u=mistral | Tuesday 02 October 2018 10:41:21 -0400 (0:00:00.049) 0:01:51.874 ******* >2018-10-02 10:41:21,698 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:21,772 p=605 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-10-02 10:41:21,772 p=605 u=mistral | Tuesday 02 October 2018 10:41:21 -0400 (0:00:00.171) 0:01:52.046 ******* >2018-10-02 10:41:21,790 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:21,815 p=605 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-10-02 10:41:21,815 p=605 u=mistral | Tuesday 02 October 2018 10:41:21 -0400 (0:00:00.042) 0:01:52.088 ******* >2018-10-02 10:41:22,418 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.notify.json)", "delta": "0:00:00.420653", "end": "2018-10-02 10:41:22.388542", "rc": 0, "start": "2018-10-02 10:41:21.967889", "stderr": "[2018-10-02 10:41:21,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json\n[2018-10-02 10:41:22,014] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:22,014] (heat-config) [DEBUG] [2018-10-02 10:41:22,008] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:22,009] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-CephStorageUpgradeInitDeployment-yoej2vkr7vbw/12ad2e01-6acb-4174-892e-3f450a1d8663\n[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:22,009] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901\n[2018-10-02 10:41:22,011] (heat-config) [INFO] \n[2018-10-02 10:41:22,011] (heat-config) [DEBUG] \n[2018-10-02 10:41:22,011] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901\n\n[2018-10-02 10:41:22,014] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:22,014] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.notify.json\n[2018-10-02 10:41:22,381] (heat-config) [INFO] \n[2018-10-02 10:41:22,381] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:21,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json", "[2018-10-02 10:41:22,014] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:22,014] (heat-config) [DEBUG] [2018-10-02 10:41:22,008] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-CephStorageUpgradeInitDeployment-yoej2vkr7vbw/12ad2e01-6acb-4174-892e-3f450a1d8663", "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:22,009] (heat-config) 
[INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:22,009] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901", "[2018-10-02 10:41:22,011] (heat-config) [INFO] ", "[2018-10-02 10:41:22,011] (heat-config) [DEBUG] ", "[2018-10-02 10:41:22,011] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901", "", "[2018-10-02 10:41:22,014] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:22,014] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.notify.json", "[2018-10-02 10:41:22,381] (heat-config) [INFO] ", "[2018-10-02 10:41:22,381] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:22,446 p=605 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-10-02 10:41:22,446 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.630) 0:01:52.719 ******* >2018-10-02 10:41:22,503 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:21,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json", > "[2018-10-02 10:41:22,014] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:22,014] (heat-config) [DEBUG] [2018-10-02 10:41:22,008] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-r7iuindp3fim-0-fcdemalfee52-CephStorageUpgradeInitDeployment-yoej2vkr7vbw/12ad2e01-6acb-4174-892e-3f450a1d8663", > "[2018-10-02 10:41:22,009] (heat-config) 
[INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:22,009] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:22,009] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901", > "[2018-10-02 10:41:22,011] (heat-config) [INFO] ", > "[2018-10-02 10:41:22,011] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:22,011] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1c228d13-9c6a-4faa-9b88-f302f8416901", > "", > "[2018-10-02 10:41:22,014] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:22,014] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.json < /var/lib/heat-config/deployed/1c228d13-9c6a-4faa-9b88-f302f8416901.notify.json", > "[2018-10-02 10:41:22,381] (heat-config) [INFO] ", > "[2018-10-02 10:41:22,381] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:22,529 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:22,529 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.083) 0:01:52.802 ******* >2018-10-02 10:41:22,544 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:22,568 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:22,568 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.039) 0:01:52.842 ******* >2018-10-02 10:41:22,671 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "48f49f4d-209e-4f92-b98a-9f1e35a2c1c9"}, "changed": false} >2018-10-02 10:41:22,697 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:22,697 p=605 u=mistral | Tuesday 
02 October 2018 10:41:22 -0400 (0:00:00.129) 0:01:52.971 ******* >2018-10-02 10:41:22,797 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} >2018-10-02 10:41:22,822 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:22,822 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.124) 0:01:53.096 ******* >2018-10-02 10:41:22,844 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:22,867 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:22,868 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.045) 0:01:53.141 ******* >2018-10-02 10:41:22,887 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:22,911 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:22,911 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.043) 0:01:53.184 ******* >2018-10-02 10:41:22,930 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:22,955 p=605 u=mistral | TASK [Render deployment file for CephStorageDeployment for check-mode] ********* >2018-10-02 10:41:22,955 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.043) 0:01:53.228 ******* >2018-10-02 10:41:22,973 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:22,997 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:22,997 p=605 u=mistral | Tuesday 02 October 2018 10:41:22 -0400 (0:00:00.042) 0:01:53.270 ******* >2018-10-02 10:41:23,016 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 10:41:23,039 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:23,039 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.042) 0:01:53.312 ******* >2018-10-02 10:41:23,057 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:23,080 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:23,081 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.041) 0:01:53.354 ******* >2018-10-02 10:41:23,109 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:23,137 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:23,137 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.056) 0:01:53.410 ******* >2018-10-02 10:41:23,161 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:23,185 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:23,185 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.048) 0:01:53.458 ******* >2018-10-02 10:41:23,203 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:23,227 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:23,227 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.041) 0:01:53.500 ******* >2018-10-02 10:41:23,247 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:23,272 p=605 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-10-02 10:41:23,273 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.045) 
0:01:53.546 ******* >2018-10-02 10:41:23,824 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "2e4dd86ce9ec8b6002b23ab7d6185f1b23d80b1c", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-48f49f4d-209e-4f92-b98a-9f1e35a2c1c9", "gid": 0, "group": "root", "md5sum": "1ada41785f380ccb9bb2359e230a30ef", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9081, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491283.38-936146758817/source", "state": "file", "uid": 0} >2018-10-02 10:41:23,849 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-10-02 10:41:23,849 p=605 u=mistral | Tuesday 02 October 2018 10:41:23 -0400 (0:00:00.576) 0:01:54.122 ******* >2018-10-02 10:41:24,040 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:24,067 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-10-02 10:41:24,067 p=605 u=mistral | Tuesday 02 October 2018 10:41:24 -0400 (0:00:00.218) 0:01:54.341 ******* >2018-10-02 10:41:24,091 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:24,118 p=605 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-10-02 10:41:24,119 p=605 u=mistral | Tuesday 02 October 2018 10:41:24 -0400 (0:00:00.051) 0:01:54.392 ******* >2018-10-02 10:41:24,141 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:24,167 p=605 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-10-02 10:41:24,167 p=605 u=mistral | Tuesday 02 October 2018 10:41:24 -0400 (0:00:00.048) 0:01:54.440 ******* >2018-10-02 10:41:24,187 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 10:41:24,214 p=605 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-10-02 10:41:24,214 p=605 u=mistral | Tuesday 02 October 2018 10:41:24 -0400 (0:00:00.046) 0:01:54.487 ******* >2018-10-02 10:41:24,965 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.notify.json)", "delta": "0:00:00.554610", "end": "2018-10-02 10:41:24.938016", "rc": 0, "start": "2018-10-02 10:41:24.383406", "stderr": "[2018-10-02 10:41:24,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json\n[2018-10-02 10:41:24,535] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:24,535] (heat-config) [DEBUG] \n[2018-10-02 10:41:24,536] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:41:24,536] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.notify.json\n[2018-10-02 10:41:24,932] (heat-config) [INFO] \n[2018-10-02 10:41:24,932] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:24,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json", "[2018-10-02 10:41:24,535] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:24,535] (heat-config) [DEBUG] ", "[2018-10-02 10:41:24,536] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:41:24,536] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.notify.json", "[2018-10-02 10:41:24,932] (heat-config) [INFO] ", "[2018-10-02 10:41:24,932] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:24,991 p=605 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-10-02 10:41:24,991 p=605 u=mistral | Tuesday 02 October 2018 10:41:24 -0400 (0:00:00.777) 0:01:55.264 ******* >2018-10-02 10:41:25,050 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:24,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json", > "[2018-10-02 10:41:24,535] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:24,535] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:24,536] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:41:24,536] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.json < /var/lib/heat-config/deployed/48f49f4d-209e-4f92-b98a-9f1e35a2c1c9.notify.json", > "[2018-10-02 10:41:24,932] (heat-config) [INFO] ", > "[2018-10-02 10:41:24,932] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:25,076 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:25,076 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.084) 0:01:55.349 ******* >2018-10-02 10:41:25,096 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,122 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 
10:41:25,122 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.045) 0:01:55.395 ******* >2018-10-02 10:41:25,189 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ea998e44-ab1f-480d-b531-4ff2b8210742"}, "changed": false} >2018-10-02 10:41:25,216 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:25,216 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.094) 0:01:55.489 ******* >2018-10-02 10:41:25,280 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:25,313 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:25,313 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.097) 0:01:55.586 ******* >2018-10-02 10:41:25,334 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,359 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:25,359 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.046) 0:01:55.633 ******* >2018-10-02 10:41:25,379 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,405 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:25,405 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.045) 0:01:55.678 ******* >2018-10-02 10:41:25,425 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,452 p=605 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment for check-mode] **** >2018-10-02 10:41:25,452 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.047) 0:01:55.725 ******* >2018-10-02 
10:41:25,470 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,495 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:25,495 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.042) 0:01:55.768 ******* >2018-10-02 10:41:25,513 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,540 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:25,541 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.045) 0:01:55.814 ******* >2018-10-02 10:41:25,560 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,585 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:25,585 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.044) 0:01:55.859 ******* >2018-10-02 10:41:25,608 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,631 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:25,631 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.045) 0:01:55.905 ******* >2018-10-02 10:41:25,655 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:25,678 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:25,678 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.046) 0:01:55.951 ******* >2018-10-02 10:41:25,698 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:25,720 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] 
********************************** >2018-10-02 10:41:25,721 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.042) 0:01:55.994 ******* >2018-10-02 10:41:25,741 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:25,765 p=605 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-10-02 10:41:25,765 p=605 u=mistral | Tuesday 02 October 2018 10:41:25 -0400 (0:00:00.044) 0:01:56.039 ******* >2018-10-02 10:41:26,252 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e7c9d50edce9647254b598968037403e7e15d4af", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-ea998e44-ab1f-480d-b531-4ff2b8210742", "gid": 0, "group": "root", "md5sum": "36c8109ba7fed65453409c83fd09da91", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4431, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491285.83-158486168318954/source", "state": "file", "uid": 0} >2018-10-02 10:41:26,279 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-10-02 10:41:26,280 p=605 u=mistral | Tuesday 02 October 2018 10:41:26 -0400 (0:00:00.514) 0:01:56.553 ******* >2018-10-02 10:41:26,471 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:26,494 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-10-02 10:41:26,494 p=605 u=mistral | Tuesday 02 October 2018 10:41:26 -0400 (0:00:00.214) 0:01:56.767 ******* >2018-10-02 10:41:26,513 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:26,535 p=605 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-10-02 10:41:26,535 p=605 u=mistral | Tuesday 02 October 2018 10:41:26 -0400 (0:00:00.040) 0:01:56.808 ******* >2018-10-02 10:41:26,557 
p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:26,581 p=605 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-10-02 10:41:26,582 p=605 u=mistral | Tuesday 02 October 2018 10:41:26 -0400 (0:00:00.046) 0:01:56.855 ******* >2018-10-02 10:41:26,602 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:26,627 p=605 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-10-02 10:41:26,627 p=605 u=mistral | Tuesday 02 October 2018 10:41:26 -0400 (0:00:00.045) 0:01:56.900 ******* >2018-10-02 10:41:27,297 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.notify.json)", "delta": "0:00:00.446203", "end": "2018-10-02 10:41:27.238504", "rc": 0, "start": "2018-10-02 10:41:26.792301", "stderr": "[2018-10-02 10:41:26,815] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json\n[2018-10-02 10:41:26,860] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain 
controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:26,860] (heat-config) [DEBUG] [2018-10-02 10:41:26,835] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-10-02 10:41:26,835] (heat-config) [INFO] 
deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-kgeyzalaxlif-0-xsrrb2jne2zj/bfed1444-ecd2-48d4-9895-a6d1764138fb\n[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:26,836] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742\n[2018-10-02 10:41:26,857] (heat-config) [INFO] \n[2018-10-02 10:41:26,857] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 
ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /ceph-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.10 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.14 controller-0.localdomain controller-0\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.123 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.28 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.10 compute-0.external.localdomain compute-0.external\n192.168.24.10 compute-0.management.localdomain compute-0.management\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.32 ceph-0.localdomain ceph-0\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-10-02 10:41:26,857] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742\n\n[2018-10-02 10:41:26,860] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:26,861] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.notify.json\n[2018-10-02 10:41:27,232] (heat-config) [INFO] \n[2018-10-02 10:41:27,232] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:26,815] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json", "[2018-10-02 10:41:26,860] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain 
compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 
ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:26,860] (heat-config) [DEBUG] [2018-10-02 10:41:26,835] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-10-02 10:41:26,835] (heat-config) 
[INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-kgeyzalaxlif-0-xsrrb2jne2zj/bfed1444-ecd2-48d4-9895-a6d1764138fb", "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:26,836] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742", "[2018-10-02 10:41:26,857] (heat-config) [INFO] ", "[2018-10-02 10:41:26,857] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", 
"192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 
ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", 
"172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 
ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /ceph-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.10 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.14 controller-0.localdomain controller-0", "172.17.3.25 controller-0.storage.localdomain controller-0.storage", "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.123 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.28 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.10 compute-0.external.localdomain compute-0.external", "192.168.24.10 compute-0.management.localdomain compute-0.management", "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.32 ceph-0.localdomain ceph-0", "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-10-02 10:41:26,857] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742", "", "[2018-10-02 10:41:26,860] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:26,861] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.notify.json", "[2018-10-02 10:41:27,232] (heat-config) [INFO] ", "[2018-10-02 10:41:27,232] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:27,342 p=605 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-10-02 10:41:27,342 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.714) 0:01:57.615 ******* >2018-10-02 10:41:27,431 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:26,815] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json", > "[2018-10-02 10:41:26,860] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain 
controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.8 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.8 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.10 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.14 controller-0.localdomain controller-0\\n172.17.3.25 controller-0.storage.localdomain controller-0.storage\\n172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.123 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.28 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.20 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.10 compute-0.external.localdomain compute-0.external\\n192.168.24.10 compute-0.management.localdomain compute-0.management\\n192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.32 ceph-0.localdomain ceph-0\\n172.17.3.32 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:26,860] (heat-config) [DEBUG] [2018-10-02 10:41:26,835] (heat-config) [INFO] hosts=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-kgeyzalaxlif-0-xsrrb2jne2zj/bfed1444-ecd2-48d4-9895-a6d1764138fb", > "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:26,835] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:26,836] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742", > "[2018-10-02 10:41:26,857] (heat-config) [INFO] ", > "[2018-10-02 10:41:26,857] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain 
compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 
compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > 
"172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 
controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > 
"192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain 
controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.8 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.10 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.14 controller-0.localdomain controller-0", > "172.17.3.25 controller-0.storage.localdomain controller-0.storage", > "172.17.4.22 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.14 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.123 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.28 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.10 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.28 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.20 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.10 compute-0.external.localdomain compute-0.external", > "192.168.24.10 compute-0.management.localdomain compute-0.management", > "192.168.24.10 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.32 ceph-0.localdomain ceph-0", > "172.17.3.32 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-10-02 10:41:26,857] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ea998e44-ab1f-480d-b531-4ff2b8210742", > "", > "[2018-10-02 10:41:26,860] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:26,861] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.json < /var/lib/heat-config/deployed/ea998e44-ab1f-480d-b531-4ff2b8210742.notify.json", > "[2018-10-02 10:41:27,232] (heat-config) [INFO] ", > "[2018-10-02 10:41:27,232] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:27,472 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:27,472 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.130) 0:01:57.745 ******* >2018-10-02 10:41:27,490 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:27,514 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:27,514 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.042) 0:01:57.788 ******* >2018-10-02 10:41:27,682 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ce8feada-867d-4c61-9160-f290de9afa40"}, "changed": false} >2018-10-02 10:41:27,706 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:27,706 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.191) 0:01:57.979 ******* >2018-10-02 10:41:27,874 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "hiera"}, "changed": false} 
>2018-10-02 10:41:27,900 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:27,900 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.193) 0:01:58.173 ******* >2018-10-02 10:41:27,920 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:27,943 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:27,943 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.043) 0:01:58.216 ******* >2018-10-02 10:41:27,962 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:27,985 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:27,986 p=605 u=mistral | Tuesday 02 October 2018 10:41:27 -0400 (0:00:00.042) 0:01:58.259 ******* >2018-10-02 10:41:28,005 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,030 p=605 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment for check-mode] *** >2018-10-02 10:41:28,030 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.044) 0:01:58.304 ******* >2018-10-02 10:41:28,050 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,074 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:28,074 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.043) 0:01:58.347 ******* >2018-10-02 10:41:28,094 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,118 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 
10:41:28,119 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.044) 0:01:58.392 ******* >2018-10-02 10:41:28,138 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,162 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:28,162 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.043) 0:01:58.435 ******* >2018-10-02 10:41:28,187 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,211 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:28,212 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.049) 0:01:58.485 ******* >2018-10-02 10:41:28,241 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:28,270 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:28,270 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.058) 0:01:58.544 ******* >2018-10-02 10:41:28,291 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:28,315 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:28,315 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.044) 0:01:58.589 ******* >2018-10-02 10:41:28,337 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:28,363 p=605 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-10-02 10:41:28,363 p=605 u=mistral | Tuesday 02 October 2018 10:41:28 -0400 (0:00:00.047) 0:01:58.636 ******* >2018-10-02 10:41:29,034 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "588687c4a7b5e92641e71252d31368cda48337da", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-ce8feada-867d-4c61-9160-f290de9afa40", "gid": 0, "group": "root", "md5sum": "3c04ec768ba0d0e9cef6fa3206bd0db0", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19532, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491288.62-80473476590837/source", "state": "file", "uid": 0} >2018-10-02 10:41:29,057 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-10-02 10:41:29,057 p=605 u=mistral | Tuesday 02 October 2018 10:41:29 -0400 (0:00:00.693) 0:01:59.330 ******* >2018-10-02 10:41:29,240 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:29,265 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-10-02 10:41:29,265 p=605 u=mistral | Tuesday 02 October 2018 10:41:29 -0400 (0:00:00.208) 0:01:59.538 ******* >2018-10-02 10:41:29,287 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:29,312 p=605 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-10-02 10:41:29,312 p=605 u=mistral | Tuesday 02 October 2018 10:41:29 -0400 (0:00:00.046) 0:01:59.585 ******* >2018-10-02 10:41:29,333 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:29,355 p=605 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-10-02 10:41:29,355 p=605 u=mistral | Tuesday 02 October 2018 10:41:29 -0400 (0:00:00.043) 0:01:59.628 ******* >2018-10-02 10:41:29,373 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:29,395 p=605 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] **************************** 
>2018-10-02 10:41:29,395 p=605 u=mistral | Tuesday 02 October 2018 10:41:29 -0400 (0:00:00.040) 0:01:59.669 ******* >2018-10-02 10:41:30,234 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.notify.json)", "delta": "0:00:00.572116", "end": "2018-10-02 10:41:30.205735", "rc": 0, "start": "2018-10-02 10:41:29.633619", "stderr": "[2018-10-02 10:41:29,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json\n[2018-10-02 10:41:29,788] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:29,788] (heat-config) [DEBUG] \n[2018-10-02 10:41:29,788] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-10-02 10:41:29,789] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.notify.json\n[2018-10-02 10:41:30,199] (heat-config) [INFO] \n[2018-10-02 10:41:30,199] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:29,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json", "[2018-10-02 10:41:29,788] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:29,788] (heat-config) [DEBUG] ", "[2018-10-02 10:41:29,788] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-10-02 10:41:29,789] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.notify.json", "[2018-10-02 10:41:30,199] 
(heat-config) [INFO] ", "[2018-10-02 10:41:30,199] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:30,260 p=605 u=mistral | TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-10-02 10:41:30,261 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.865) 0:02:00.534 ******* >2018-10-02 10:41:30,391 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:29,660] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json", > "[2018-10-02 10:41:29,788] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:29,788] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:29,788] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-10-02 10:41:29,789] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.json < /var/lib/heat-config/deployed/ce8feada-867d-4c61-9160-f290de9afa40.notify.json", > "[2018-10-02 10:41:30,199] (heat-config) [INFO] ", > "[2018-10-02 10:41:30,199] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:30,419 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:30,420 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.158) 0:02:00.693 ******* >2018-10-02 10:41:30,437 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,514 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:30,514 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.094) 0:02:00.787 ******* >2018-10-02 10:41:30,586 p=605 u=mistral | ok: [ceph-0] => 
{"ansible_facts": {"deployment_uuid": "d22d62cd-bc81-4db2-81f0-84a004895013"}, "changed": false} >2018-10-02 10:41:30,612 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:30,612 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.098) 0:02:00.886 ******* >2018-10-02 10:41:30,687 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:30,713 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:30,714 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.101) 0:02:00.987 ******* >2018-10-02 10:41:30,732 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,756 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:30,756 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.042) 0:02:01.029 ******* >2018-10-02 10:41:30,776 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,801 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:30,801 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.044) 0:02:01.074 ******* >2018-10-02 10:41:30,821 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,845 p=605 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment for check-mode] *** >2018-10-02 10:41:30,846 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.044) 0:02:01.119 ******* >2018-10-02 10:41:30,863 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,886 p=605 
u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:30,887 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.040) 0:02:01.160 ******* >2018-10-02 10:41:30,905 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,926 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:30,926 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.039) 0:02:01.200 ******* >2018-10-02 10:41:30,951 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:30,979 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:30,979 p=605 u=mistral | Tuesday 02 October 2018 10:41:30 -0400 (0:00:00.052) 0:02:01.253 ******* >2018-10-02 10:41:31,003 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:31,027 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:31,027 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.047) 0:02:01.300 ******* >2018-10-02 10:41:31,051 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:31,075 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:31,075 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.047) 0:02:01.348 ******* >2018-10-02 10:41:31,096 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:31,120 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:31,120 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.045) 0:02:01.394 
******* >2018-10-02 10:41:31,139 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:31,163 p=605 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-10-02 10:41:31,164 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.043) 0:02:01.437 ******* >2018-10-02 10:41:31,676 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "01b58b2ad841ac1415df649e6967c11ec24423d8", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-d22d62cd-bc81-4db2-81f0-84a004895013", "gid": 0, "group": "root", "md5sum": "2d9cc7a4e318c09e1fca73afa8600210", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491291.24-61685166413768/source", "state": "file", "uid": 0} >2018-10-02 10:41:31,701 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-10-02 10:41:31,701 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.537) 0:02:01.974 ******* >2018-10-02 10:41:31,898 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:31,924 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-10-02 10:41:31,924 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.222) 0:02:02.197 ******* >2018-10-02 10:41:31,943 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:31,970 p=605 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-10-02 10:41:31,970 p=605 u=mistral | Tuesday 02 October 2018 10:41:31 -0400 (0:00:00.045) 0:02:02.243 ******* >2018-10-02 10:41:31,992 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:41:32,018 p=605 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-10-02 10:41:32,018 p=605 u=mistral | Tuesday 02 October 2018 10:41:32 -0400 (0:00:00.048) 0:02:02.292 ******* >2018-10-02 10:41:32,037 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:32,063 p=605 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-10-02 10:41:32,063 p=605 u=mistral | Tuesday 02 October 2018 10:41:32 -0400 (0:00:00.044) 0:02:02.336 ******* >2018-10-02 10:41:33,249 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.notify.json)", "delta": "0:00:00.988338", "end": "2018-10-02 10:41:33.221897", "rc": 0, "start": "2018-10-02 10:41:32.233559", "stderr": "[2018-10-02 10:41:32,258] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json\n[2018-10-02 10:41:32,853] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:32,853] (heat-config) [DEBUG] [2018-10-02 10:41:32,280] (heat-config) [INFO] ping_test_ips=172.17.3.25 
172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12\n[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_fqdn=False\n[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_ntp=True\n[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-ym7byk2vuv5l-0-gdklpw7f4rln/26f62acb-b4d0-4653-9443-f5c6837b5c4e\n[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:32,280] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013\n[2018-10-02 10:41:32,849] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\nPing to 10.0.0.123 succeeded.\nSUCCESS\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\nPing to 172.17.3.25 succeeded.\nSUCCESS\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\nPing to 172.17.4.22 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-10-02 10:41:32,850] (heat-config) [DEBUG] \n[2018-10-02 10:41:32,850] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013\n\n[2018-10-02 10:41:32,854] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:32,854] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.notify.json\n[2018-10-02 
10:41:33,216] (heat-config) [INFO] \n[2018-10-02 10:41:33,216] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:32,258] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json", "[2018-10-02 10:41:32,853] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:32,853] (heat-config) [DEBUG] [2018-10-02 10:41:32,280] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", "[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_fqdn=False", "[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_ntp=True", "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-ym7byk2vuv5l-0-gdklpw7f4rln/26f62acb-b4d0-4653-9443-f5c6837b5c4e", "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:32,280] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013", "[2018-10-02 
10:41:32,849] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.", "Ping to 10.0.0.123 succeeded.", "SUCCESS", "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", "Ping to 172.17.3.25 succeeded.", "SUCCESS", "Trying to ping 172.17.4.22 for local network 172.17.4.0/24.", "Ping to 172.17.4.22 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-10-02 10:41:32,850] (heat-config) [DEBUG] ", "[2018-10-02 10:41:32,850] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013", "", "[2018-10-02 10:41:32,854] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:32,854] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.notify.json", "[2018-10-02 10:41:33,216] (heat-config) [INFO] ", "[2018-10-02 10:41:33,216] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:33,274 p=605 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-10-02 10:41:33,274 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:01.210) 0:02:03.547 ******* >2018-10-02 10:41:33,326 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:32,258] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json", > "[2018-10-02 10:41:32,853] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.123 for local network 10.0.0.0/24.\\nPing to 10.0.0.123 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.25 
for local network 172.17.3.0/24.\\nPing to 172.17.3.25 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.22 for local network 172.17.4.0/24.\\nPing to 172.17.4.22 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:32,853] (heat-config) [DEBUG] [2018-10-02 10:41:32,280] (heat-config) [INFO] ping_test_ips=172.17.3.25 172.17.4.22 172.17.1.14 172.17.2.12 10.0.0.123 192.168.24.12", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_fqdn=False", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] validate_ntp=True", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-ym7byk2vuv5l-0-gdklpw7f4rln/26f62acb-b4d0-4653-9443-f5c6837b5c4e", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:32,280] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:32,280] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013", > "[2018-10-02 10:41:32,849] (heat-config) [INFO] Trying to ping 10.0.0.123 for local network 10.0.0.0/24.", > "Ping to 10.0.0.123 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.25 for local network 172.17.3.0/24.", > "Ping to 172.17.3.25 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.22 for local network 172.17.4.0/24.", > "Ping to 172.17.4.22 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 
192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-10-02 10:41:32,850] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:32,850] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d22d62cd-bc81-4db2-81f0-84a004895013", > "", > "[2018-10-02 10:41:32,854] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:32,854] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.json < /var/lib/heat-config/deployed/d22d62cd-bc81-4db2-81f0-84a004895013.notify.json", > "[2018-10-02 10:41:33,216] (heat-config) [INFO] ", > "[2018-10-02 10:41:33,216] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:33,354 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:33,354 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.080) 0:02:03.627 ******* >2018-10-02 10:41:33,371 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,396 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:33,396 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.041) 0:02:03.669 ******* >2018-10-02 10:41:33,479 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "1c6ba6e3-7a70-4cd2-a16d-9b9626675e15"}, "changed": false} >2018-10-02 10:41:33,503 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:33,504 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.107) 0:02:03.777 ******* >2018-10-02 10:41:33,587 p=605 u=mistral | 
ok: [ceph-0] => {"ansible_facts": {"deployment_group": "ansible"}, "changed": false} >2018-10-02 10:41:33,613 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:33,613 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.109) 0:02:03.886 ******* >2018-10-02 10:41:33,632 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,657 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:33,657 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.044) 0:02:03.931 ******* >2018-10-02 10:41:33,676 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,703 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:33,703 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.046) 0:02:03.977 ******* >2018-10-02 10:41:33,723 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,748 p=605 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment for check-mode] *** >2018-10-02 10:41:33,748 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.045) 0:02:04.022 ******* >2018-10-02 10:41:33,769 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,793 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:33,793 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.044) 0:02:04.066 ******* >2018-10-02 10:41:33,812 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,836 p=605 u=mistral | TASK [List 
hieradata files for check mode] ************************************* >2018-10-02 10:41:33,836 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.042) 0:02:04.109 ******* >2018-10-02 10:41:33,856 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,880 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:33,880 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.044) 0:02:04.154 ******* >2018-10-02 10:41:33,902 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:33,926 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:33,926 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.045) 0:02:04.199 ******* >2018-10-02 10:41:33,947 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:33,970 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:33,970 p=605 u=mistral | Tuesday 02 October 2018 10:41:33 -0400 (0:00:00.044) 0:02:04.244 ******* >2018-10-02 10:41:33,991 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:34,014 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:34,015 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.044) 0:02:04.288 ******* >2018-10-02 10:41:34,038 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:34,066 p=605 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-10-02 10:41:34,066 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.051) 0:02:04.339 ******* >2018-10-02 10:41:34,604 p=605 u=mistral | changed: [ceph-0] => {"changed": true, 
"checksum": "5aac528ecb5ecdc608ec7495fe966fc02a0ffb56", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-1c6ba6e3-7a70-4cd2-a16d-9b9626675e15", "gid": 0, "group": "root", "md5sum": "e1118426ec93f590e58b2316ddedd489", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21380, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491294.15-71074360044655/source", "state": "file", "uid": 0} >2018-10-02 10:41:34,630 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-10-02 10:41:34,631 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.564) 0:02:04.904 ******* >2018-10-02 10:41:34,825 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:34,852 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-10-02 10:41:34,852 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.221) 0:02:05.125 ******* >2018-10-02 10:41:34,872 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:34,899 p=605 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-10-02 10:41:34,899 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.047) 0:02:05.173 ******* >2018-10-02 10:41:34,921 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:34,949 p=605 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-10-02 10:41:34,949 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.049) 0:02:05.222 ******* >2018-10-02 10:41:34,967 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:34,995 p=605 u=mistral | TASK [Run 
deployment CephStorageHostPrepDeployment] **************************** >2018-10-02 10:41:34,996 p=605 u=mistral | Tuesday 02 October 2018 10:41:34 -0400 (0:00:00.046) 0:02:05.269 ******* >2018-10-02 10:41:41,391 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.notify.json)", "delta": "0:00:06.198061", "end": "2018-10-02 10:41:41.361903", "rc": 0, "start": "2018-10-02 10:41:35.163842", "stderr": "[2018-10-02 10:41:35,189] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json\n[2018-10-02 10:41:40,967] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:40,968] (heat-config) [DEBUG] [2018-10-02 10:41:35,211] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_variables.json\n[2018-10-02 10:41:40,964] (heat-config) [INFO] Return code 0\n[2018-10-02 10:41:40,964] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] 
*********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-10-02 10:41:40,964] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml\n\n[2018-10-02 10:41:40,968] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-10-02 10:41:40,968] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json < /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.notify.json\n[2018-10-02 10:41:41,356] (heat-config) [INFO] \n[2018-10-02 10:41:41,356] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:35,189] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json", "[2018-10-02 10:41:40,967] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:40,968] (heat-config) [DEBUG] [2018-10-02 10:41:35,211] (heat-config) [DEBUG] Running ansible-playbook -i 
localhost, /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_variables.json", "[2018-10-02 10:41:40,964] (heat-config) [INFO] Return code 0", "[2018-10-02 10:41:40,964] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-10-02 10:41:40,964] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml", "", "[2018-10-02 10:41:40,968] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-10-02 10:41:40,968] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json < /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.notify.json", "[2018-10-02 10:41:41,356] (heat-config) [INFO] ", "[2018-10-02 10:41:41,356] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:41,419 p=605 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-10-02 10:41:41,419 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:06.423) 0:02:11.693 ******* >2018-10-02 10:41:41,480 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:35,189] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < 
/var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json", > "[2018-10-02 10:41:40,967] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:40,968] (heat-config) [DEBUG] [2018-10-02 10:41:35,211] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_variables.json", > "[2018-10-02 10:41:40,964] (heat-config) [INFO] Return code 0", > "[2018-10-02 10:41:40,964] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-10-02 10:41:40,964] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15_playbook.yaml", > "", > "[2018-10-02 10:41:40,968] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-10-02 10:41:40,968] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.json < /var/lib/heat-config/deployed/1c6ba6e3-7a70-4cd2-a16d-9b9626675e15.notify.json", > "[2018-10-02 10:41:41,356] (heat-config) [INFO] ", > "[2018-10-02 10:41:41,356] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:41,508 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment (changed status indicates deployment would run)] *** >2018-10-02 10:41:41,509 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.089) 0:02:11.782 ******* >2018-10-02 10:41:41,524 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,549 p=605 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-10-02 10:41:41,549 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.040) 0:02:11.823 ******* >2018-10-02 10:41:41,610 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "2fbf8368-5d12-4a56-b8d2-39c89c67484c"}, "changed": false} >2018-10-02 10:41:41,635 p=605 u=mistral | TASK [Lookup deployment group] ************************************************* >2018-10-02 10:41:41,635 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.085) 0:02:11.908 ******* >2018-10-02 10:41:41,697 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_group": "script"}, "changed": false} >2018-10-02 10:41:41,722 p=605 u=mistral | TASK [Create hiera check-mode directory] *************************************** >2018-10-02 10:41:41,723 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.087) 0:02:11.996 ******* >2018-10-02 10:41:41,744 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:41:41,768 p=605 u=mistral | TASK [Create deployed check-mode directory] ************************************ >2018-10-02 10:41:41,769 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.046) 0:02:12.042 ******* >2018-10-02 10:41:41,788 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,812 p=605 u=mistral | TASK [Create tripleo-config-download check-mode directory] ********************* >2018-10-02 10:41:41,812 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.043) 0:02:12.085 ******* >2018-10-02 10:41:41,831 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,855 p=605 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy for check-mode] **** >2018-10-02 10:41:41,855 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.043) 0:02:12.128 ******* >2018-10-02 10:41:41,873 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,895 p=605 u=mistral | TASK [Run hiera deployment for check mode] ************************************* >2018-10-02 10:41:41,895 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.040) 0:02:12.169 ******* >2018-10-02 10:41:41,913 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,938 p=605 u=mistral | TASK [List hieradata files for check mode] ************************************* >2018-10-02 10:41:41,938 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.042) 0:02:12.211 ******* >2018-10-02 10:41:41,956 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:41,979 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 
10:41:41,979 p=605 u=mistral | Tuesday 02 October 2018 10:41:41 -0400 (0:00:00.040) 0:02:12.252 ******* >2018-10-02 10:41:42,001 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:42,026 p=605 u=mistral | TASK [diff hieradata changes for check mode] *********************************** >2018-10-02 10:41:42,026 p=605 u=mistral | Tuesday 02 October 2018 10:41:42 -0400 (0:00:00.047) 0:02:12.299 ******* >2018-10-02 10:41:42,047 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:42,071 p=605 u=mistral | TASK [hiera.yaml changes for check mode] *************************************** >2018-10-02 10:41:42,071 p=605 u=mistral | Tuesday 02 October 2018 10:41:42 -0400 (0:00:00.045) 0:02:12.344 ******* >2018-10-02 10:41:42,091 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:42,114 p=605 u=mistral | TASK [diff hiera.yaml changes for check mode] ********************************** >2018-10-02 10:41:42,115 p=605 u=mistral | Tuesday 02 October 2018 10:41:42 -0400 (0:00:00.043) 0:02:12.388 ******* >2018-10-02 10:41:42,134 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:41:42,158 p=605 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-10-02 10:41:42,159 p=605 u=mistral | Tuesday 02 October 2018 10:41:42 -0400 (0:00:00.043) 0:02:12.432 ******* >2018-10-02 10:41:42,735 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "5010defc3e9b577869a7297973acb06da94f7e14", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-2fbf8368-5d12-4a56-b8d2-39c89c67484c", "gid": 0, "group": "root", "md5sum": "18b33d7aca83300a288da2997039f19f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491302.29-34755653081356/source", "state": "file", 
"uid": 0} >2018-10-02 10:41:42,765 p=605 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-10-02 10:41:42,765 p=605 u=mistral | Tuesday 02 October 2018 10:41:42 -0400 (0:00:00.606) 0:02:13.039 ******* >2018-10-02 10:41:43,032 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:41:43,056 p=605 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-10-02 10:41:43,056 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.290) 0:02:13.329 ******* >2018-10-02 10:41:43,078 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:43,103 p=605 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-10-02 10:41:43,103 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.047) 0:02:13.377 ******* >2018-10-02 10:41:43,126 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:43,152 p=605 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-10-02 10:41:43,152 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.048) 0:02:13.425 ******* >2018-10-02 10:41:43,171 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:43,195 p=605 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-10-02 10:41:43,196 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.043) 0:02:13.469 ******* >2018-10-02 10:41:43,857 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.notify.json)", "delta": 
"0:00:00.463015", "end": "2018-10-02 10:41:43.829048", "rc": 0, "start": "2018-10-02 10:41:43.366033", "stderr": "[2018-10-02 10:41:43,391] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json\n[2018-10-02 10:41:43,421] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-10-02 10:41:43,421] (heat-config) [DEBUG] [2018-10-02 10:41:43,412] (heat-config) [INFO] artifact_urls=\n[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f\n[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_action=CREATE\n[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-CephStorageArtifactsDeploy-mgkfjd6hcahn-0-cxhxqun7rtyr/fe5231da-19ac-4474-b6b4-84259b1fa270\n[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-10-02 10:41:43,413] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c\n[2018-10-02 10:41:43,418] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-10-02 10:41:43,418] (heat-config) [DEBUG] \n[2018-10-02 10:41:43,418] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c\n\n[2018-10-02 10:41:43,421] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-10-02 10:41:43,422] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.notify.json\n[2018-10-02 10:41:43,823] (heat-config) [INFO] \n[2018-10-02 10:41:43,823] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-10-02 10:41:43,391] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json", "[2018-10-02 10:41:43,421] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-10-02 10:41:43,421] (heat-config) [DEBUG] [2018-10-02 10:41:43,412] (heat-config) [INFO] artifact_urls=", "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_action=CREATE", "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-CephStorageArtifactsDeploy-mgkfjd6hcahn-0-cxhxqun7rtyr/fe5231da-19ac-4474-b6b4-84259b1fa270", "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-10-02 10:41:43,413] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c", "[2018-10-02 10:41:43,418] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-10-02 10:41:43,418] (heat-config) [DEBUG] ", "[2018-10-02 10:41:43,418] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c", "", "[2018-10-02 10:41:43,421] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-10-02 10:41:43,422] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.notify.json", "[2018-10-02 10:41:43,823] (heat-config) [INFO] ", "[2018-10-02 10:41:43,823] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:43,884 p=605 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-10-02 10:41:43,884 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.688) 0:02:14.157 ******* >2018-10-02 10:41:43,948 p=605 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-10-02 10:41:43,391] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json", > "[2018-10-02 10:41:43,421] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-10-02 10:41:43,421] (heat-config) [DEBUG] [2018-10-02 10:41:43,412] (heat-config) [INFO] artifact_urls=", > "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_server_id=fab5596e-6ad9-4ebc-98e9-9493a17a1f8f", > "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_action=CREATE", > "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-kxw7gj7kfige-CephStorageArtifactsDeploy-mgkfjd6hcahn-0-cxhxqun7rtyr/fe5231da-19ac-4474-b6b4-84259b1fa270", > "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-10-02 10:41:43,413] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-10-02 10:41:43,413] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c", > "[2018-10-02 10:41:43,418] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-10-02 10:41:43,418] (heat-config) [DEBUG] ", > "[2018-10-02 10:41:43,418] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2fbf8368-5d12-4a56-b8d2-39c89c67484c", > "", > "[2018-10-02 10:41:43,421] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-10-02 10:41:43,422] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.json < /var/lib/heat-config/deployed/2fbf8368-5d12-4a56-b8d2-39c89c67484c.notify.json", > "[2018-10-02 10:41:43,823] (heat-config) [INFO] ", > "[2018-10-02 10:41:43,823] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-10-02 10:41:43,977 p=605 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy (changed status indicates deployment would run)] *** >2018-10-02 10:41:43,977 p=605 u=mistral | Tuesday 02 October 2018 10:41:43 -0400 (0:00:00.093) 0:02:14.250 ******* >2018-10-02 10:41:43,994 p=605 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,001 p=605 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-10-02 10:41:44,051 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:41:44,051 p=605 u=mistral | Tuesday 02 October 2018 10:41:44 -0400 (0:00:00.073) 0:02:14.324 ******* >2018-10-02 10:41:44,116 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,117 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,136 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,143 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,271 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/aodh) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:44,438 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, 
"state": "directory", "uid": 0} >2018-10-02 10:41:44,468 p=605 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-10-02 10:41:44,468 p=605 u=mistral | Tuesday 02 October 2018 10:41:44 -0400 (0:00:00.417) 0:02:14.741 ******* >2018-10-02 10:41:44,531 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,545 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:44,942 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-10-02 10:41:44,942 p=605 u=mistral | ...ignoring >2018-10-02 10:41:44,969 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:41:44,970 p=605 u=mistral | Tuesday 02 October 2018 10:41:44 -0400 (0:00:00.501) 0:02:15.243 ******* >2018-10-02 10:41:45,033 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,046 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,172 p=605 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:45,200 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:41:45,201 p=605 u=mistral | Tuesday 02 October 2018 10:41:45 -0400 (0:00:00.230) 0:02:15.474 ******* >2018-10-02 10:41:45,271 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,284 p=605 u=mistral 
| skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,412 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:45,440 p=605 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-10-02 10:41:45,440 p=605 u=mistral | Tuesday 02 October 2018 10:41:45 -0400 (0:00:00.239) 0:02:15.713 ******* >2018-10-02 10:41:45,503 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,517 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:45,918 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-10-02 10:41:45,919 p=605 u=mistral | ...ignoring >2018-10-02 10:41:45,945 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:41:45,945 p=605 u=mistral | Tuesday 02 October 2018 10:41:45 -0400 (0:00:00.504) 0:02:16.218 ******* >2018-10-02 10:41:46,006 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,007 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,032 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", 
"skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,038 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,161 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:46,328 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:46,354 p=605 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-10-02 10:41:46,354 p=605 u=mistral | Tuesday 02 October 2018 10:41:46 -0400 (0:00:00.409) 0:02:16.627 ******* >2018-10-02 10:41:46,412 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,427 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,845 p=605 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-10-02 10:41:46,845 p=605 u=mistral | ...ignoring >2018-10-02 10:41:46,871 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:41:46,871 p=605 u=mistral | Tuesday 02 October 2018 10:41:46 -0400 (0:00:00.517) 0:02:17.144 ******* >2018-10-02 10:41:46,931 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,932 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,950 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:46,957 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,071 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:47,239 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:47,269 p=605 u=mistral | TASK [ensure ceph configurations exist] 
**************************************** >2018-10-02 10:41:47,269 p=605 u=mistral | Tuesday 02 October 2018 10:41:47 -0400 (0:00:00.397) 0:02:17.542 ******* >2018-10-02 10:41:47,332 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,347 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,470 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:47,496 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:41:47,497 p=605 u=mistral | Tuesday 02 October 2018 10:41:47 -0400 (0:00:00.227) 0:02:17.770 ******* >2018-10-02 10:41:47,557 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,576 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,704 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:47,732 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:41:47,732 p=605 u=mistral | Tuesday 02 October 2018 10:41:47 -0400 (0:00:00.235) 0:02:18.005 ******* >2018-10-02 10:41:47,801 p=605 u=mistral | skipping: [compute-0] => 
(item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,802 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,818 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,824 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:41:47,945 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:48,113 p=605 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:48,142 p=605 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-10-02 10:41:48,142 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.410) 0:02:18.416 ******* >2018-10-02 10:41:48,206 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,207 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-10-02 10:41:48,220 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,246 p=605 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-10-02 10:41:48,246 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.103) 0:02:18.519 ******* >2018-10-02 10:41:48,277 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,306 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,319 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,347 p=605 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-10-02 10:41:48,347 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.100) 0:02:18.620 ******* >2018-10-02 10:41:48,377 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,411 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,425 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,453 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:41:48,453 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.105) 0:02:18.726 ******* >2018-10-02 10:41:48,516 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,518 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": 
"--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 10:41:48,529 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,556 p=605 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 10:41:48,556 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.103) 0:02:18.830 ******* >2018-10-02 10:41:48,617 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,630 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:48,719 p=605 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 10:41:48,719 p=605 u=mistral | Tuesday 02 October 2018 10:41:48 -0400 (0:00:00.162) 0:02:18.992 ******* >2018-10-02 10:41:49,150 p=605 u=mistral | changed: [controller-0] => {"changed": true} >2018-10-02 10:41:49,180 p=605 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 10:41:49,180 p=605 u=mistral | Tuesday 02 October 2018 10:41:49 -0400 (0:00:00.461) 0:02:19.453 ******* >2018-10-02 10:41:49,700 p=605 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 10:41:49,727 p=605 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-10-02 10:41:49,727 p=605 u=mistral | Tuesday 02 October 2018 10:41:49 -0400 (0:00:00.546) 0:02:20.000 ******* >2018-10-02 10:41:49,941 p=605 u=mistral | changed: [controller-0] => {"changed": true, 
"gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:49,968 p=605 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 10:41:49,968 p=605 u=mistral | Tuesday 02 October 2018 10:41:49 -0400 (0:00:00.241) 0:02:20.241 ******* >2018-10-02 10:41:50,379 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 10:41:50,400 p=605 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 10:41:50,400 p=605 u=mistral | Tuesday 02 October 2018 10:41:50 -0400 (0:00:00.431) 0:02:20.673 ******* >2018-10-02 10:41:50,619 p=605 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:41:50,642 p=605 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 10:41:50,642 p=605 u=mistral | Tuesday 02 October 2018 10:41:50 -0400 (0:00:00.242) 0:02:20.915 ******* >2018-10-02 10:41:50,863 p=605 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 10:41:50,887 p=605 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 10:41:50,887 p=605 u=mistral | Tuesday 02 October 2018 10:41:50 -0400 (0:00:00.245) 0:02:21.160 ******* >2018-10-02 10:41:51,187 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", 
"owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:41:51,233 p=605 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 10:41:51,233 p=605 u=mistral | Tuesday 02 October 2018 10:41:51 -0400 (0:00:00.345) 0:02:21.506 ******* >2018-10-02 10:41:51,868 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491311.36-89089964786742/source", "state": "file", "uid": 0} >2018-10-02 10:41:51,892 p=605 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 10:41:51,892 p=605 u=mistral | Tuesday 02 October 2018 10:41:51 -0400 (0:00:00.659) 0:02:22.166 ******* >2018-10-02 10:41:52,252 p=605 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:41:52,278 p=605 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 10:41:52,278 p=605 u=mistral | Tuesday 02 October 2018 10:41:52 -0400 (0:00:00.386) 0:02:22.552 ******* >2018-10-02 10:41:52,503 p=605 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:41:52,528 p=605 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 10:41:52,528 p=605 u=mistral | Tuesday 02 October 2018 10:41:52 -0400 (0:00:00.249) 0:02:22.801 ******* >2018-10-02 10:41:52,882 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", 
"system": false} >2018-10-02 10:41:52,911 p=605 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-10-02 10:41:52,911 p=605 u=mistral | Tuesday 02 October 2018 10:41:52 -0400 (0:00:00.383) 0:02:23.184 ******* >2018-10-02 10:41:52,935 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:41:52,936 p=605 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 10:41:52,936 p=605 u=mistral | Tuesday 02 October 2018 10:41:52 -0400 (0:00:00.025) 0:02:23.210 ******* >2018-10-02 10:41:53,198 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.003309", "end": "2018-10-02 10:41:53.135322", "rc": 0, "start": "2018-10-02 10:41:53.132013", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 10:41:53,199 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 10:41:53,199 p=605 u=mistral | Tuesday 02 October 2018 10:41:53 -0400 (0:00:00.262) 0:02:23.473 ******* >2018-10-02 10:41:53,632 p=605 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 10:41:53,633 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 10:41:53,633 p=605 u=mistral | Tuesday 02 October 2018 10:41:53 -0400 (0:00:00.434) 0:02:23.907 ******* >2018-10-02 10:41:55,174 p=605 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target rhel-push-plugin.socket system.slice registries.service network.target docker-storage-setup.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", 
"AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": 
"0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": 
"10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 10:41:55,176 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 10:41:55,176 p=605 u=mistral | Tuesday 02 October 2018 10:41:55 -0400 (0:00:01.542) 0:02:25.449 ******* >2018-10-02 10:41:55,247 p=605 u=mistral | Pausing for 10 seconds >2018-10-02 10:41:55,247 p=605 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 10:41:55,247 p=605 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 10:42:05,251 p=605 u=mistral | ok: [controller-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 10:41:55.247153", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 10:42:05.247311", "user_input": ""} >2018-10-02 10:42:05,252 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 10:42:05,252 p=605 u=mistral | Tuesday 02 October 2018 10:42:05 -0400 (0:00:10.076) 0:02:35.526 ******* >2018-10-02 10:42:05,538 p=605 u=mistral | changed: [controller-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.044294", "end": "2018-10-02 10:42:05.496161", "rc": 0, "start": 
"2018-10-02 10:42:05.451867", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 10:42:05,563 p=605 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 10:42:05,564 p=605 u=mistral | Tuesday 02 October 2018 10:42:05 -0400 (0:00:00.311) 0:02:35.837 ******* >2018-10-02 10:42:05,867 p=605 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 10:41:55 EDT", "ActiveEnterTimestampMonotonic": "358928702", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target rhel-push-plugin.socket system.slice registries.service network.target docker-storage-setup.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:41:53 EDT", "AssertTimestampMonotonic": "357755780", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:41:53 EDT", "ConditionTimestampMonotonic": "357755780", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash 
DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "15154", "ExecMainStartTimestamp": "Tue 2018-10-02 10:41:53 EDT", "ExecMainStartTimestampMonotonic": "357757276", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 10:41:53 EDT] ; stop_time=[n/a] ; pid=15154 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:41:53 EDT", "InactiveExitTimestampMonotonic": "357757355", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": 
"1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "15154", "MemoryAccounting": "no", "MemoryCurrent": "66965504", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "25", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Tue 2018-10-02 10:41:55 EDT", 
"WatchdogTimestampMonotonic": "358928648", "WatchdogUSec": "0"}} >2018-10-02 10:42:05,894 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:05,894 p=605 u=mistral | Tuesday 02 October 2018 10:42:05 -0400 (0:00:00.330) 0:02:36.167 ******* >2018-10-02 10:42:05,956 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:05,974 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,101 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/glance) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:06,127 p=605 u=mistral | TASK [glance logs readme] ****************************************************** >2018-10-02 10:42:06,127 p=605 u=mistral | Tuesday 02 October 2018 10:42:06 -0400 (0:00:00.232) 0:02:36.400 ******* >2018-10-02 10:42:06,193 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,207 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,587 p=605 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-10-02 10:42:06,587 p=605 u=mistral | ...ignoring >2018-10-02 10:42:06,613 p=605 u=mistral | TASK [Set glance remote_file_path fact] **************************************** >2018-10-02 10:42:06,614 p=605 u=mistral | Tuesday 02 October 2018 10:42:06 -0400 (0:00:00.486) 0:02:36.887 ******* >2018-10-02 10:42:06,645 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,677 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,689 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,715 p=605 u=mistral | TASK [Create glance remote_file_path] ****************************************** >2018-10-02 10:42:06,715 p=605 u=mistral | Tuesday 02 October 2018 10:42:06 -0400 (0:00:00.100) 0:02:36.988 ******* >2018-10-02 10:42:06,744 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,774 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,788 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,814 p=605 u=mistral | TASK [stat] ******************************************************************** >2018-10-02 10:42:06,815 p=605 u=mistral | Tuesday 02 October 2018 10:42:06 -0400 (0:00:00.099) 0:02:37.088 ******* >2018-10-02 10:42:06,844 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,874 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:42:06,887 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,912 p=605 u=mistral | TASK [copy] ******************************************************************** >2018-10-02 10:42:06,913 p=605 u=mistral | Tuesday 02 October 2018 10:42:06 -0400 (0:00:00.097) 0:02:37.186 ******* >2018-10-02 10:42:06,942 p=605 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,976 p=605 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:06,995 p=605 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,019 p=605 u=mistral | TASK [Mount glance Netapp share] *********************************************** >2018-10-02 10:42:07,019 p=605 u=mistral | Tuesday 02 October 2018 10:42:07 -0400 (0:00:00.106) 0:02:37.292 ******* >2018-10-02 10:42:07,050 p=605 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,082 p=605 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,099 p=605 u=mistral | skipping: [ceph-0] => 
(item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,124 p=605 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-10-02 10:42:07,124 p=605 u=mistral | Tuesday 02 October 2018 10:42:07 -0400 (0:00:00.105) 0:02:37.397 ******* >2018-10-02 10:42:07,154 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,182 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,195 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,219 p=605 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-10-02 10:42:07,219 p=605 u=mistral | Tuesday 02 October 2018 10:42:07 -0400 (0:00:00.094) 0:02:37.492 ******* >2018-10-02 10:42:07,248 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,279 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,296 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,321 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:07,321 p=605 u=mistral | Tuesday 02 October 2018 10:42:07 -0400 (0:00:00.101) 0:02:37.594 ******* >2018-10-02 10:42:07,389 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": 
"Conditional result was False"} >2018-10-02 10:42:07,391 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,410 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,416 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,601 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:07,768 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:07,796 p=605 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-10-02 10:42:07,797 p=605 u=mistral | Tuesday 02 October 2018 10:42:07 -0400 (0:00:00.475) 0:02:38.070 ******* >2018-10-02 10:42:07,861 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:07,877 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,343 p=605 u=mistral | fatal: 
[controller-0]: FAILED! => {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-10-02 10:42:08,343 p=605 u=mistral | ...ignoring >2018-10-02 10:42:08,416 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:08,416 p=605 u=mistral | Tuesday 02 October 2018 10:42:08 -0400 (0:00:00.619) 0:02:38.689 ******* >2018-10-02 10:42:08,480 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,494 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,620 p=605 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:08,647 p=605 u=mistral | TASK [get parameters] ********************************************************** >2018-10-02 10:42:08,647 p=605 u=mistral | Tuesday 02 October 2018 10:42:08 -0400 (0:00:00.231) 0:02:38.920 ******* >2018-10-02 10:42:08,708 p=605 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:42:08,710 p=605 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:42:08,723 p=605 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:42:08,749 p=605 u=mistral | TASK [get DeployedSSLCertificatePath attributes] ******************************* >2018-10-02 10:42:08,750 p=605 u=mistral | Tuesday 02 
October 2018 10:42:08 -0400 (0:00:00.102) 0:02:39.023 ******* >2018-10-02 10:42:08,782 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,813 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,836 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,862 p=605 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-10-02 10:42:08,862 p=605 u=mistral | Tuesday 02 October 2018 10:42:08 -0400 (0:00:00.112) 0:02:39.135 ******* >2018-10-02 10:42:08,894 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,923 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,938 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:08,965 p=605 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-10-02 10:42:08,965 p=605 u=mistral | Tuesday 02 October 2018 10:42:08 -0400 (0:00:00.103) 0:02:39.239 ******* >2018-10-02 10:42:08,996 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,027 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,039 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,065 p=605 u=mistral | TASK [get haproxy status] ****************************************************** >2018-10-02 10:42:09,065 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.099) 0:02:39.338 ******* >2018-10-02 
10:42:09,096 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,126 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,145 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,173 p=605 u=mistral | TASK [get pacemaker status] **************************************************** >2018-10-02 10:42:09,174 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.108) 0:02:39.447 ******* >2018-10-02 10:42:09,204 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,234 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,247 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,272 p=605 u=mistral | TASK [get docker status] ******************************************************* >2018-10-02 10:42:09,273 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.098) 0:02:39.546 ******* >2018-10-02 10:42:09,303 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,332 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,346 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,372 p=605 u=mistral | TASK [get container_id] ******************************************************** >2018-10-02 10:42:09,372 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.099) 0:02:39.645 ******* >2018-10-02 10:42:09,404 p=605 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,436 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,448 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,472 p=605 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-10-02 10:42:09,472 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.099) 0:02:39.745 ******* >2018-10-02 10:42:09,500 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,529 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,542 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,566 p=605 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-10-02 10:42:09,566 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.094) 0:02:39.839 ******* >2018-10-02 10:42:09,595 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,625 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,638 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,665 p=605 u=mistral | TASK [push certificate content] ************************************************ >2018-10-02 10:42:09,665 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.098) 0:02:39.938 ******* >2018-10-02 10:42:09,695 p=605 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was 
specified for this result", "changed": false} >2018-10-02 10:42:09,732 p=605 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:42:09,744 p=605 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:42:09,770 p=605 u=mistral | TASK [set certificate ownership] *********************************************** >2018-10-02 10:42:09,770 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.104) 0:02:40.043 ******* >2018-10-02 10:42:09,802 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,833 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,847 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,873 p=605 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-10-02 10:42:09,873 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.103) 0:02:40.147 ******* >2018-10-02 10:42:09,906 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,936 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,949 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:09,974 p=605 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-10-02 10:42:09,974 p=605 u=mistral | Tuesday 02 October 2018 10:42:09 -0400 (0:00:00.100) 0:02:40.247 ******* >2018-10-02 10:42:10,003 p=605 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,031 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,049 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,078 p=605 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-10-02 10:42:10,078 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.104) 0:02:40.352 ******* >2018-10-02 10:42:10,108 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,136 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,148 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,174 p=605 u=mistral | TASK [assert {{ kolla_dir }}{{ cert_path }} exists] **************************** >2018-10-02 10:42:10,174 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.096) 0:02:40.448 ******* >2018-10-02 10:42:10,203 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,231 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,243 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,267 p=605 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-10-02 10:42:10,267 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.092) 0:02:40.540 ******* >2018-10-02 10:42:10,294 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 10:42:10,322 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,339 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,366 p=605 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-10-02 10:42:10,366 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.098) 0:02:40.639 ******* >2018-10-02 10:42:10,396 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,424 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,436 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,460 p=605 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-10-02 10:42:10,460 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.094) 0:02:40.733 ******* >2018-10-02 10:42:10,489 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,518 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,531 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,557 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:10,557 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.097) 0:02:40.831 ******* >2018-10-02 10:42:10,621 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result 
was False"} >2018-10-02 10:42:10,648 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,768 p=605 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-10-02 10:42:10,801 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:10,801 p=605 u=mistral | Tuesday 02 October 2018 10:42:10 -0400 (0:00:00.243) 0:02:41.074 ******* >2018-10-02 10:42:10,870 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,872 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,890 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:10,896 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,015 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/heat) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 
10:42:11,180 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:11,210 p=605 u=mistral | TASK [heat logs readme] ******************************************************** >2018-10-02 10:42:11,210 p=605 u=mistral | Tuesday 02 October 2018 10:42:11 -0400 (0:00:00.408) 0:02:41.483 ******* >2018-10-02 10:42:11,274 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,289 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,687 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-10-02 10:42:11,688 p=605 u=mistral | ...ignoring >2018-10-02 10:42:11,712 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:11,712 p=605 u=mistral | Tuesday 02 October 2018 10:42:11 -0400 (0:00:00.501) 0:02:41.985 ******* >2018-10-02 10:42:11,779 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,781 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,797 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": 
"Conditional result was False"} >2018-10-02 10:42:11,804 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:11,923 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:12,088 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:12,115 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:12,115 p=605 u=mistral | Tuesday 02 October 2018 10:42:12 -0400 (0:00:00.403) 0:02:42.388 ******* >2018-10-02 10:42:12,181 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,195 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,319 p=605 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:12,347 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:12,347 p=605 u=mistral | Tuesday 02 October 2018 10:42:12 -0400 (0:00:00.232) 
0:02:42.620 ******* >2018-10-02 10:42:12,414 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,415 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,442 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,447 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:12,563 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:12,726 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:12,754 p=605 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-10-02 10:42:12,754 p=605 u=mistral | Tuesday 02 October 2018 10:42:12 -0400 (0:00:00.407) 0:02:43.028 ******* >2018-10-02 10:42:12,820 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:42:12,836 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,228 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-10-02 10:42:13,228 p=605 u=mistral | ...ignoring >2018-10-02 10:42:13,254 p=605 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 10:42:13,254 p=605 u=mistral | Tuesday 02 October 2018 10:42:13 -0400 (0:00:00.499) 0:02:43.528 ******* >2018-10-02 10:42:13,316 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,329 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,464 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538491313.5635169, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537979153.5750983, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2886261, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1807870409", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 10:42:13,492 p=605 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 10:42:13,493 p=605 u=mistral | Tuesday 
02 October 2018 10:42:13 -0400 (0:00:00.238) 0:02:43.766 ******* >2018-10-02 10:42:13,556 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,570 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,832 p=605 u=mistral | changed: [controller-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 10:35:59 EDT", "ActiveEnterTimestampMonotonic": "3588349", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:35:59 EDT", "AssertTimestampMonotonic": "3588097", "Backlog": "128", "Before": "iscsid.service shutdown.target sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:35:59 EDT", "ConditionTimestampMonotonic": "3588097", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", 
"InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:35:59 EDT", "InactiveExitTimestampMonotonic": "3588349", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", 
"StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-10-02 10:42:13,859 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:13,859 p=605 u=mistral | Tuesday 02 October 2018 10:42:13 -0400 (0:00:00.366) 0:02:44.133 ******* >2018-10-02 10:42:13,922 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,923 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,944 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:13,947 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:14,120 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/keystone) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-10-02 10:42:14,271 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:14,300 p=605 u=mistral | TASK [keystone logs readme] **************************************************** >2018-10-02 10:42:14,300 p=605 u=mistral | Tuesday 02 October 2018 10:42:14 -0400 (0:00:00.440) 0:02:44.573 ******* >2018-10-02 10:42:14,408 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:14,422 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:14,807 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-10-02 10:42:14,807 p=605 u=mistral | ...ignoring >2018-10-02 10:42:14,835 p=605 u=mistral | TASK [memcached logs readme] *************************************************** >2018-10-02 10:42:14,835 p=605 u=mistral | Tuesday 02 October 2018 10:42:14 -0400 (0:00:00.534) 0:02:45.108 ******* >2018-10-02 10:42:14,897 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:14,911 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,327 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3b6f3952a077d2e5003df30c8c439478917cb6c4", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "md5sum": "ffdb1524e5789470856ae32ded4e2f80", "mode": "0644", "owner": "root", "secontext": 
"system_u:object_r:var_log_t:s0", "size": 48, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491334.88-90235698748826/source", "state": "file", "uid": 0} >2018-10-02 10:42:15,355 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:15,355 p=605 u=mistral | Tuesday 02 October 2018 10:42:15 -0400 (0:00:00.520) 0:02:45.628 ******* >2018-10-02 10:42:15,422 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,423 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,440 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,446 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,560 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/mysql) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:15,722 p=605 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": "/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-10-02 10:42:15,749 p=605 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-10-02 10:42:15,750 
p=605 u=mistral | Tuesday 02 October 2018 10:42:15 -0400 (0:00:00.394) 0:02:46.023 ******* >2018-10-02 10:42:15,808 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:15,822 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,247 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "md5sum": "1f3e80eed7060dfe5ee49c8063244c53", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491335.8-277872050427315/source", "state": "file", "uid": 0} >2018-10-02 10:42:16,275 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:16,275 p=605 u=mistral | Tuesday 02 October 2018 10:42:16 -0400 (0:00:00.525) 0:02:46.548 ******* >2018-10-02 10:42:16,340 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,341 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,362 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,368 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,480 p=605 u=mistral | changed: 
[controller-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:16,642 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:16,670 p=605 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-10-02 10:42:16,670 p=605 u=mistral | Tuesday 02 October 2018 10:42:16 -0400 (0:00:00.395) 0:02:46.943 ******* >2018-10-02 10:42:16,737 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:16,753 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,123 p=605 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-10-02 10:42:17,124 p=605 u=mistral | ...ignoring >2018-10-02 10:42:17,151 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:17,151 p=605 u=mistral | Tuesday 02 October 2018 10:42:17 -0400 (0:00:00.481) 0:02:47.425 ******* >2018-10-02 10:42:17,216 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,235 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,357 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:17,385 p=605 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-10-02 10:42:17,385 p=605 u=mistral | Tuesday 02 October 2018 10:42:17 -0400 (0:00:00.233) 0:02:47.658 ******* >2018-10-02 10:42:17,450 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,463 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,590 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 
10:42:17,618 p=605 u=mistral | TASK [Copy in cleanup script] ************************************************** >2018-10-02 10:42:17,618 p=605 u=mistral | Tuesday 02 October 2018 10:42:17 -0400 (0:00:00.232) 0:02:47.891 ******* >2018-10-02 10:42:17,681 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:17,694 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:18,127 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491337.66-11188624609304/source", "state": "file", "uid": 0} >2018-10-02 10:42:18,154 p=605 u=mistral | TASK [Copy in cleanup service] ************************************************* >2018-10-02 10:42:18,154 p=605 u=mistral | Tuesday 02 October 2018 10:42:18 -0400 (0:00:00.535) 0:02:48.427 ******* >2018-10-02 10:42:18,223 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:18,237 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:18,682 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491338.2-254394740513329/source", "state": "file", "uid": 0} >2018-10-02 10:42:18,710 p=605 u=mistral 
| TASK [Enabling the cleanup service] ******************************************** >2018-10-02 10:42:18,710 p=605 u=mistral | Tuesday 02 October 2018 10:42:18 -0400 (0:00:00.555) 0:02:48.983 ******* >2018-10-02 10:42:18,774 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:18,788 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:18,994 p=605 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target systemd-journald.socket system.slice openvswitch.service network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "docker.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": 
"/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", 
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 10:42:19,023 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:19,024 p=605 u=mistral | Tuesday 02 October 2018 10:42:19 -0400 (0:00:00.313) 0:02:49.297 ******* >2018-10-02 10:42:19,089 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,090 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,113 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,118 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,228 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 
6, "state": "directory", "uid": 0} >2018-10-02 10:42:19,390 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:19,418 p=605 u=mistral | TASK [nova logs readme] ******************************************************** >2018-10-02 10:42:19,418 p=605 u=mistral | Tuesday 02 October 2018 10:42:19 -0400 (0:00:00.394) 0:02:49.691 ******* >2018-10-02 10:42:19,481 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,495 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,882 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-10-02 10:42:19,883 p=605 u=mistral | ...ignoring >2018-10-02 10:42:19,910 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:19,910 p=605 u=mistral | Tuesday 02 October 2018 10:42:19 -0400 (0:00:00.492) 0:02:50.184 ******* >2018-10-02 10:42:19,974 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:19,988 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,109 p=605 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:20,137 p=605 
u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:20,137 p=605 u=mistral | Tuesday 02 October 2018 10:42:20 -0400 (0:00:00.226) 0:02:50.410 ******* >2018-10-02 10:42:20,206 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,207 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,224 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,230 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,354 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:20,521 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:20,549 p=605 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 10:42:20,549 p=605 u=mistral | 
Tuesday 02 October 2018 10:42:20 -0400 (0:00:00.412) 0:02:50.822 ******* >2018-10-02 10:42:20,614 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,616 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 10:42:20,627 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,654 p=605 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 10:42:20,654 p=605 u=mistral | Tuesday 02 October 2018 10:42:20 -0400 (0:00:00.104) 0:02:50.927 ******* >2018-10-02 10:42:20,686 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,717 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,735 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,768 p=605 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 10:42:20,768 p=605 u=mistral | Tuesday 02 October 2018 10:42:20 -0400 (0:00:00.113) 0:02:51.041 ******* >2018-10-02 10:42:20,835 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:20,850 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:27,874 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.900485", "end": "2018-10-02 10:42:27.842726", "rc": 0, "start": "2018-10-02 10:42:20.942241", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 10:42:27 ntpdate[16555]: adjust time server 
10.11.160.238 offset 0.005016 sec", "stdout_lines": [" 2 Oct 10:42:27 ntpdate[16555]: adjust time server 10.11.160.238 offset 0.005016 sec"]} >2018-10-02 10:42:27,902 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:27,902 p=605 u=mistral | Tuesday 02 October 2018 10:42:27 -0400 (0:00:07.134) 0:02:58.175 ******* >2018-10-02 10:42:27,968 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:27,970 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:27,986 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:27,993 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:28,188 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/panko) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:28,359 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 
10:42:28,390 p=605 u=mistral | TASK [panko logs readme] ******************************************************* >2018-10-02 10:42:28,390 p=605 u=mistral | Tuesday 02 October 2018 10:42:28 -0400 (0:00:00.488) 0:02:58.664 ******* >2018-10-02 10:42:28,473 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:28,487 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:28,985 p=605 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-10-02 10:42:28,985 p=605 u=mistral | ...ignoring >2018-10-02 10:42:29,010 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:29,010 p=605 u=mistral | Tuesday 02 October 2018 10:42:29 -0400 (0:00:00.619) 0:02:59.283 ******* >2018-10-02 10:42:29,076 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,077 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,094 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,101 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,235 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", 
"owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:29,404 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:29,431 p=605 u=mistral | TASK [rabbitmq logs readme] **************************************************** >2018-10-02 10:42:29,431 p=605 u=mistral | Tuesday 02 October 2018 10:42:29 -0400 (0:00:00.420) 0:02:59.705 ******* >2018-10-02 10:42:29,493 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,507 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:29,914 p=605 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-10-02 10:42:29,914 p=605 u=mistral | ...ignoring >2018-10-02 10:42:29,941 p=605 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-10-02 10:42:29,941 p=605 u=mistral | Tuesday 02 October 2018 10:42:29 -0400 (0:00:00.509) 0:03:00.214 ******* >2018-10-02 10:42:29,998 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,013 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,188 p=605 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.045168", "end": "2018-10-02 10:42:30.160234", "rc": 0, "start": "2018-10-02 10:42:30.115066", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} >2018-10-02 10:42:30,215 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:30,215 p=605 u=mistral | Tuesday 02 October 2018 10:42:30 -0400 (0:00:00.273) 0:03:00.488 ******* >2018-10-02 10:42:30,274 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,276 p=605 u=mistral | skipping: 
[compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,277 p=605 u=mistral | skipping: [compute-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,292 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,298 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,305 p=605 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,421 p=605 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-10-02 10:42:30,583 p=605 u=mistral | changed: [controller-0] => (item=/var/log/containers/redis) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:30,747 p=605 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-10-02 10:42:30,774 p=605 u=mistral | TASK [redis logs readme] 
******************************************************* >2018-10-02 10:42:30,774 p=605 u=mistral | Tuesday 02 October 2018 10:42:30 -0400 (0:00:00.559) 0:03:01.047 ******* >2018-10-02 10:42:30,834 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:30,848 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,287 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "md5sum": "26fc3dbfb40d3414a608e987cc577748", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491350.82-219252263832045/source", "state": "file", "uid": 0} >2018-10-02 10:42:31,314 p=605 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-10-02 10:42:31,314 p=605 u=mistral | Tuesday 02 October 2018 10:42:31 -0400 (0:00:00.540) 0:03:01.587 ******* >2018-10-02 10:42:31,374 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,389 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,524 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:31,551 p=605 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-10-02 10:42:31,551 p=605 u=mistral | Tuesday 02 October 2018 10:42:31 -0400 (0:00:00.237) 0:03:01.825 ******* >2018-10-02 10:42:31,611 p=605 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,635 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,758 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:31,783 p=605 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-10-02 10:42:31,784 p=605 u=mistral | Tuesday 02 October 2018 10:42:31 -0400 (0:00:00.232) 0:03:02.057 ******* >2018-10-02 10:42:31,846 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:31,860 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,277 p=605 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-10-02 10:42:32,277 p=605 u=mistral | ...ignoring >2018-10-02 10:42:32,304 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:32,305 p=605 u=mistral | Tuesday 02 October 2018 10:42:32 -0400 (0:00:00.520) 0:03:02.578 ******* >2018-10-02 10:42:32,368 p=605 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,371 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,386 p=605 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,392 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,542 p=605 u=mistral | changed: [controller-0] => (item=/srv/node) => {"changed": true, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:32,705 p=605 u=mistral | changed: [controller-0] => (item=/var/log/swift) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:32,738 p=605 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-10-02 10:42:32,738 p=605 u=mistral | Tuesday 02 October 2018 10:42:32 -0400 (0:00:00.433) 
0:03:03.011 ******* >2018-10-02 10:42:32,815 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,830 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:32,953 p=605 u=mistral | changed: [controller-0] => {"changed": true, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-10-02 10:42:32,979 p=605 u=mistral | TASK [Check if rsyslog exists] ************************************************* >2018-10-02 10:42:32,980 p=605 u=mistral | Tuesday 02 October 2018 10:42:32 -0400 (0:00:00.241) 0:03:03.253 ******* >2018-10-02 10:42:33,040 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:33,053 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:33,193 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1538490962.4605484, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1537979117.8760984, "dev": 64514, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 588, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0755", "mtime": 1537975062.799, "nlink": 2, "path": "/etc/rsyslog.d", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 50, "uid": 0, "version": "18446744072318778956", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true}} >2018-10-02 10:42:33,220 p=605 u=mistral | TASK [Forward 
logging to swift.log file] *************************************** >2018-10-02 10:42:33,220 p=605 u=mistral | Tuesday 02 October 2018 10:42:33 -0400 (0:00:00.240) 0:03:03.494 ******* >2018-10-02 10:42:33,278 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:33,304 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:33,751 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "828097d22e649626706b267b5a61f05e49999586", "dest": "/etc/rsyslog.d/openstack-swift.conf", "gid": 0, "group": "root", "md5sum": "2118142de3156b2432c5c12816a4967c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:syslog_conf_t:s0", "size": 138, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491353.27-234365089981354/source", "state": "file", "uid": 0} >2018-10-02 10:42:33,779 p=605 u=mistral | TASK [Restart rsyslogd service after logging conf change] ********************** >2018-10-02 10:42:33,779 p=605 u=mistral | Tuesday 02 October 2018 10:42:33 -0400 (0:00:00.558) 0:03:04.053 ******* >2018-10-02 10:42:33,800 p=605 u=mistral | [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using >`result|changed` instead use `result is changed`. This feature will be removed >in version 2.9. Deprecation warnings can be disabled by setting >deprecation_warnings=False in ansible.cfg. 
>2018-10-02 10:42:33,843 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:33,857 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,062 p=605 u=mistral | changed: [controller-0] => {"changed": true, "name": "rsyslog", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 10:36:02 EDT", "ActiveEnterTimestampMonotonic": "6288381", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target network-online.target system.slice basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:36:02 EDT", "AssertTimestampMonotonic": "6235457", "Before": "multi-user.target shutdown.target pacemaker.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:36:02 EDT", "ConditionTimestampMonotonic": "6235457", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/rsyslog.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "System Logging Service", "DevicePolicy": "auto", "Documentation": "man:rsyslogd(8) http://www.rsyslog.com/doc/", "EnvironmentFile": "/etc/sysconfig/rsyslog (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1732", "ExecMainStartTimestamp": "Tue 2018-10-02 10:36:02 EDT", "ExecMainStartTimestampMonotonic": "6236724", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/rsyslogd ; argv[]=/usr/sbin/rsyslogd -n 
$SYSLOGD_OPTIONS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/rsyslog.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "rsyslog.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:36:02 EDT", "InactiveExitTimestampMonotonic": "6236760", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1732", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "rsyslog.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0066", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "network.target network-online.target system.slice", "WatchdogTimestamp": "Tue 2018-10-02 10:36:02 EDT", "WatchdogTimestampMonotonic": "6288293", "WatchdogUSec": "0"}} >2018-10-02 10:42:34,089 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:34,089 p=605 u=mistral | Tuesday 02 October 2018 10:42:34 -0400 (0:00:00.309) 0:03:04.362 ******* >2018-10-02 10:42:34,151 p=605 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,153 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,154 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,172 p=605 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,179 p=605 u=mistral | skipping: [ceph-0] => 
(item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,183 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,288 p=605 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:34,444 p=605 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:34,609 p=605 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 244, "state": "directory", "uid": 0} >2018-10-02 10:42:34,637 p=605 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-10-02 10:42:34,637 p=605 u=mistral | Tuesday 02 October 2018 10:42:34 -0400 (0:00:00.548) 0:03:04.911 ******* >2018-10-02 10:42:34,701 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,703 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-10-02 10:42:34,715 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,741 p=605 u=mistral | TASK [Create Swift d1 directory if needed] 
************************************* >2018-10-02 10:42:34,742 p=605 u=mistral | Tuesday 02 October 2018 10:42:34 -0400 (0:00:00.104) 0:03:05.015 ******* >2018-10-02 10:42:34,804 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,818 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:34,951 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:34,978 p=605 u=mistral | TASK [swift logs readme] ******************************************************* >2018-10-02 10:42:34,978 p=605 u=mistral | Tuesday 02 October 2018 10:42:34 -0400 (0:00:00.236) 0:03:05.251 ******* >2018-10-02 10:42:35,045 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,059 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,522 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "md5sum": "23163287d564762945ee1738f049dc10", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491355.03-243694294130056/source", "state": "file", "uid": 0} >2018-10-02 10:42:35,550 p=605 u=mistral | TASK [Set fact for SwiftRawDisks] ********************************************** >2018-10-02 10:42:35,550 p=605 u=mistral | Tuesday 02 October 2018 10:42:35 -0400 (0:00:00.571) 0:03:05.823 ******* >2018-10-02 10:42:35,614 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-10-02 10:42:35,616 p=605 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_raw_disks": {}}, "changed": false} >2018-10-02 10:42:35,628 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,655 p=605 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-10-02 10:42:35,656 p=605 u=mistral | Tuesday 02 October 2018 10:42:35 -0400 (0:00:00.105) 0:03:05.929 ******* >2018-10-02 10:42:35,723 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,743 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,773 p=605 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-10-02 10:42:35,774 p=605 u=mistral | Tuesday 02 October 2018 10:42:35 -0400 (0:00:00.117) 0:03:06.047 ******* >2018-10-02 10:42:35,840 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,855 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,882 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:35,883 p=605 u=mistral | Tuesday 02 October 2018 10:42:35 -0400 (0:00:00.108) 0:03:06.156 ******* >2018-10-02 10:42:35,915 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:35,962 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:36,121 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", 
"secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:36,149 p=605 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-10-02 10:42:36,150 p=605 u=mistral | Tuesday 02 October 2018 10:42:36 -0400 (0:00:00.267) 0:03:06.423 ******* >2018-10-02 10:42:36,181 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:36,236 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:36,681 p=605 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-10-02 10:42:36,681 p=605 u=mistral | ...ignoring >2018-10-02 10:42:36,708 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:36,708 p=605 u=mistral | Tuesday 02 October 2018 10:42:36 -0400 (0:00:00.558) 0:03:06.981 ******* >2018-10-02 10:42:36,741 p=605 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:36,798 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:36,958 p=605 u=mistral | changed: [compute-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:36,988 p=605 u=mistral | TASK [neutron logs readme] 
***************************************************** >2018-10-02 10:42:36,988 p=605 u=mistral | Tuesday 02 October 2018 10:42:36 -0400 (0:00:00.280) 0:03:07.261 ******* >2018-10-02 10:42:37,024 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:37,075 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:37,520 p=605 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-10-02 10:42:37,520 p=605 u=mistral | ...ignoring >2018-10-02 10:42:37,550 p=605 u=mistral | TASK [Copy in cleanup script] ************************************************** >2018-10-02 10:42:37,551 p=605 u=mistral | Tuesday 02 October 2018 10:42:37 -0400 (0:00:00.562) 0:03:07.824 ******* >2018-10-02 10:42:37,585 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:37,635 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:38,176 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491357.69-44807825030584/source", "state": "file", "uid": 0} >2018-10-02 10:42:38,204 p=605 u=mistral | TASK [Copy in cleanup service] ************************************************* >2018-10-02 10:42:38,204 p=605 u=mistral | Tuesday 02 October 2018 10:42:38 -0400 (0:00:00.653) 0:03:08.477 ******* >2018-10-02 10:42:38,236 p=605 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:38,284 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:38,797 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491358.33-158563949515930/source", "state": "file", "uid": 0} >2018-10-02 10:42:38,827 p=605 u=mistral | TASK [Enabling the cleanup service] ******************************************** >2018-10-02 10:42:38,827 p=605 u=mistral | Tuesday 02 October 2018 10:42:38 -0400 (0:00:00.623) 0:03:09.101 ******* >2018-10-02 10:42:38,913 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:38,961 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:39,212 p=605 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket network.target basic.target openvswitch.service system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target docker.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": 
"no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": 
"no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 10:42:39,240 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:42:39,240 p=605 u=mistral | Tuesday 02 October 2018 10:42:39 -0400 (0:00:00.412) 0:03:09.513 ******* >2018-10-02 10:42:39,271 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:39,317 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false 
--live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 10:42:39,321 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:39,348 p=605 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 10:42:39,348 p=605 u=mistral | Tuesday 02 October 2018 10:42:39 -0400 (0:00:00.107) 0:03:09.621 ******* >2018-10-02 10:42:39,379 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:39,427 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:39,480 p=605 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 10:42:39,480 p=605 u=mistral | Tuesday 02 October 2018 10:42:39 -0400 (0:00:00.132) 0:03:09.754 ******* >2018-10-02 10:42:39,708 p=605 u=mistral | changed: [compute-0] => {"changed": true} >2018-10-02 10:42:39,730 p=605 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 10:42:39,730 p=605 u=mistral | Tuesday 02 October 2018 10:42:39 -0400 (0:00:00.249) 0:03:10.003 ******* >2018-10-02 10:42:40,241 p=605 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 10:42:40,263 p=605 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-10-02 10:42:40,263 p=605 u=mistral | Tuesday 02 October 2018 10:42:40 -0400 (0:00:00.533) 0:03:10.537 ******* >2018-10-02 10:42:40,477 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": 
"/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:40,500 p=605 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 10:42:40,500 p=605 u=mistral | Tuesday 02 October 2018 10:42:40 -0400 (0:00:00.236) 0:03:10.774 ******* >2018-10-02 10:42:40,738 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 10:42:40,759 p=605 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 10:42:40,760 p=605 u=mistral | Tuesday 02 October 2018 10:42:40 -0400 (0:00:00.259) 0:03:11.033 ******* >2018-10-02 10:42:41,012 p=605 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:42:41,035 p=605 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 10:42:41,035 p=605 u=mistral | Tuesday 02 October 2018 10:42:41 -0400 (0:00:00.275) 0:03:11.308 ******* >2018-10-02 10:42:41,272 p=605 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 10:42:41,294 p=605 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 10:42:41,295 p=605 u=mistral | Tuesday 02 October 2018 10:42:41 -0400 (0:00:00.259) 0:03:11.568 ******* >2018-10-02 10:42:41,504 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": 
"unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:41,557 p=605 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 10:42:41,557 p=605 u=mistral | Tuesday 02 October 2018 10:42:41 -0400 (0:00:00.262) 0:03:11.831 ******* >2018-10-02 10:42:42,134 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491361.61-264511704295570/source", "state": "file", "uid": 0} >2018-10-02 10:42:42,154 p=605 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 10:42:42,154 p=605 u=mistral | Tuesday 02 October 2018 10:42:42 -0400 (0:00:00.597) 0:03:12.428 ******* >2018-10-02 10:42:42,380 p=605 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:42:42,402 p=605 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 10:42:42,402 p=605 u=mistral | Tuesday 02 October 2018 10:42:42 -0400 (0:00:00.247) 0:03:12.675 ******* >2018-10-02 10:42:42,644 p=605 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:42:42,666 p=605 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 10:42:42,666 p=605 u=mistral | Tuesday 02 October 2018 10:42:42 -0400 (0:00:00.263) 0:03:12.939 ******* >2018-10-02 10:42:42,898 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-10-02 10:42:42,923 p=605 u=mistral | TASK 
[container-registry : add deployment user to docker group] **************** >2018-10-02 10:42:42,924 p=605 u=mistral | Tuesday 02 October 2018 10:42:42 -0400 (0:00:00.257) 0:03:13.197 ******* >2018-10-02 10:42:42,946 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:42,948 p=605 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 10:42:42,948 p=605 u=mistral | Tuesday 02 October 2018 10:42:42 -0400 (0:00:00.024) 0:03:13.221 ******* >2018-10-02 10:42:43,190 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002073", "end": "2018-10-02 10:42:43.140604", "rc": 0, "start": "2018-10-02 10:42:43.138531", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 10:42:43,191 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 10:42:43,191 p=605 u=mistral | Tuesday 02 October 2018 10:42:43 -0400 (0:00:00.243) 0:03:13.465 ******* >2018-10-02 10:42:43,473 p=605 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 10:42:43,474 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 10:42:43,474 p=605 u=mistral | Tuesday 02 October 2018 10:42:43 -0400 (0:00:00.282) 0:03:13.747 ******* >2018-10-02 10:42:44,992 p=605 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network.target neutron-cleanup.service system.slice rhel-push-plugin.socket registries.service basic.target systemd-journald.socket docker-storage-setup.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target 
paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer registries.service rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 10:42:44,993 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 10:42:44,993 p=605 u=mistral | Tuesday 02 October 2018 10:42:44 -0400 (0:00:01.519) 0:03:15.267 ******* >2018-10-02 10:42:45,058 p=605 u=mistral | Pausing for 10 seconds >2018-10-02 10:42:45,058 p=605 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 10:42:45,058 p=605 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 10:42:55,061 p=605 u=mistral | ok: [compute-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 10:42:45.057702", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 10:42:55.057868", "user_input": ""} >2018-10-02 10:42:55,062 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 10:42:55,062 p=605 u=mistral | Tuesday 02 October 2018 10:42:55 -0400 (0:00:10.068) 0:03:25.335 ******* >2018-10-02 10:42:55,335 p=605 u=mistral | changed: [compute-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.035366", "end": "2018-10-02 10:42:55.302053", "rc": 0, "start": "2018-10-02 10:42:55.266687", "stderr": "", "stderr_lines": [], 
"stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 10:42:55,356 p=605 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 10:42:55,357 p=605 u=mistral | Tuesday 02 October 2018 10:42:55 -0400 (0:00:00.294) 0:03:25.630 ******* >2018-10-02 10:42:55,722 p=605 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 10:42:44 EDT", "ActiveEnterTimestampMonotonic": "419486770", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target neutron-cleanup.service system.slice rhel-push-plugin.socket registries.service basic.target systemd-journald.socket docker-storage-setup.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:42:43 EDT", "AssertTimestampMonotonic": "418318705", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:42:43 EDT", "ConditionTimestampMonotonic": "418318705", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 
PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14687", "ExecMainStartTimestamp": "Tue 2018-10-02 10:42:43 EDT", "ExecMainStartTimestampMonotonic": "418320043", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 10:42:43 EDT] ; stop_time=[n/a] ; pid=14687 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:42:43 EDT", "InactiveExitTimestampMonotonic": "418320080", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": 
"18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14687", "MemoryAccounting": "no", "MemoryCurrent": "62455808", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer registries.service rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Tue 2018-10-02 10:42:44 EDT", "WatchdogTimestampMonotonic": "419486724", 
"WatchdogUSec": "0"}} >2018-10-02 10:42:55,749 p=605 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 10:42:55,749 p=605 u=mistral | Tuesday 02 October 2018 10:42:55 -0400 (0:00:00.392) 0:03:26.023 ******* >2018-10-02 10:42:55,780 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:55,827 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,050 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1538491359.1534562, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537979153.5750983, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2886261, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1807870409", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-10-02 10:42:56,078 p=605 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 10:42:56,078 p=605 u=mistral | Tuesday 02 October 2018 10:42:56 -0400 (0:00:00.328) 0:03:26.351 ******* >2018-10-02 10:42:56,110 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,156 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,451 p=605 u=mistral | 
changed: [compute-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 10:35:48 EDT", "ActiveEnterTimestampMonotonic": "3022942", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:35:48 EDT", "AssertTimestampMonotonic": "3022691", "Backlog": "128", "Before": "shutdown.target iscsid.service sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:35:48 EDT", "ConditionTimestampMonotonic": "3022691", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:35:48 EDT", "InactiveExitTimestampMonotonic": "3022942", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": 
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", 
"Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-10-02 10:42:56,483 p=605 u=mistral | TASK [create persistent logs directory] **************************************** >2018-10-02 10:42:56,483 p=605 u=mistral | Tuesday 02 October 2018 10:42:56 -0400 (0:00:00.405) 0:03:26.756 ******* >2018-10-02 10:42:56,559 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,610 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,762 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:56,790 p=605 u=mistral | TASK [nova logs readme] ******************************************************** >2018-10-02 10:42:56,790 p=605 u=mistral | Tuesday 02 October 2018 10:42:56 -0400 (0:00:00.306) 0:03:27.063 ******* >2018-10-02 10:42:56,821 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:56,870 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,285 p=605 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-10-02 10:42:57,285 p=605 u=mistral | ...ignoring >2018-10-02 10:42:57,312 p=605 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-10-02 10:42:57,313 p=605 u=mistral | Tuesday 02 October 2018 10:42:57 -0400 (0:00:00.522) 0:03:27.586 ******* >2018-10-02 10:42:57,344 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,378 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,391 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,423 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:42:57,423 p=605 u=mistral | Tuesday 02 October 2018 10:42:57 -0400 (0:00:00.110) 0:03:27.696 ******* >2018-10-02 10:42:57,473 p=605 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,475 p=605 u=mistral | skipping: [controller-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,508 p=605 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,529 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,534 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", 
"skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,541 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:57,684 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:57,844 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/nova/instances) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova/instances", "mode": "0755", "owner": "root", "path": "/var/lib/nova/instances", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:58,013 p=605 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-10-02 10:42:58,041 p=605 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-10-02 10:42:58,042 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.618) 0:03:28.315 ******* >2018-10-02 10:42:58,073 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,119 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,281 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:58,309 
p=605 u=mistral | TASK [is Instance HA enabled] ************************************************** >2018-10-02 10:42:58,309 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.267) 0:03:28.582 ******* >2018-10-02 10:42:58,342 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,385 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-10-02 10:42:58,390 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,415 p=605 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-10-02 10:42:58,416 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.106) 0:03:28.689 ******* >2018-10-02 10:42:58,447 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,477 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,489 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,514 p=605 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-10-02 10:42:58,514 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.098) 0:03:28.787 ******* >2018-10-02 10:42:58,544 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,576 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,592 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,622 p=605 u=mistral | TASK [Get list of instance HA compute nodes] 
*********************************** >2018-10-02 10:42:58,622 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.108) 0:03:28.895 ******* >2018-10-02 10:42:58,657 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,690 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,704 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,731 p=605 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-10-02 10:42:58,731 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.108) 0:03:29.004 ******* >2018-10-02 10:42:58,763 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,795 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,811 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,837 p=605 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-10-02 10:42:58,838 p=605 u=mistral | Tuesday 02 October 2018 10:42:58 -0400 (0:00:00.106) 0:03:29.111 ******* >2018-10-02 10:42:58,871 p=605 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,872 p=605 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,906 p=605 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": 
"Conditional result was False"} >2018-10-02 10:42:58,907 p=605 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,908 p=605 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,931 p=605 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,936 p=605 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,943 p=605 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,948 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:58,955 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-10-02 10:42:59,082 p=605 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-10-02 10:42:59,246 p=605 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, 
"state": "directory", "uid": 0} >2018-10-02 10:42:59,410 p=605 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-10-02 10:42:59,575 p=605 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-10-02 10:42:59,740 p=605 u=mistral | changed: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:42:59,770 p=605 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-10-02 10:42:59,770 p=605 u=mistral | Tuesday 02 October 2018 10:42:59 -0400 (0:00:00.932) 0:03:30.043 ******* >2018-10-02 10:42:59,801 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:42:59,849 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,006 p=605 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-10-02 10:43:00,034 p=605 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-10-02 10:43:00,034 p=605 u=mistral | Tuesday 02 October 2018 10:43:00 -0400 (0:00:00.263) 0:03:30.307 ******* >2018-10-02 10:43:00,067 p=605 u=mistral | skipping: [controller-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,122 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,554 p=605 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-10-02 10:43:00,581 p=605 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-10-02 10:43:00,581 p=605 u=mistral | Tuesday 02 October 2018 10:43:00 -0400 (0:00:00.546) 0:03:30.854 ******* >2018-10-02 10:43:00,612 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,659 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,815 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-10-02 10:43:00,843 p=605 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-10-02 10:43:00,843 p=605 u=mistral | Tuesday 02 October 2018 10:43:00 -0400 (0:00:00.262) 0:03:31.117 ******* >2018-10-02 10:43:00,875 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:00,924 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,118 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.040082", "end": "2018-10-02 10:43:01.089782", "failed_when_result": false, "rc": 0, "start": "2018-10-02 
10:43:01.049700", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.8.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.8.x86_64"]} >2018-10-02 10:43:01,146 p=605 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-10-02 10:43:01,147 p=605 u=mistral | Tuesday 02 October 2018 10:43:01 -0400 (0:00:00.303) 0:03:31.420 ******* >2018-10-02 10:43:01,183 p=605 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,185 p=605 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,240 p=605 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,248 p=605 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,494 p=605 u=mistral | changed: [compute-0] => (item=libvirtd.service) => {"changed": true, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 10:35:50 EDT", "ActiveEnterTimestampMonotonic": "4858220", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "virtlogd.socket virtlockd.service dbus.service apparmor.service basic.target virtlogd.service local-fs.target remote-fs.target iscsid.service virtlockd.socket systemd-journald.socket network.target system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:35:50 EDT", "AssertTimestampMonotonic": "4632658", "Before": "shutdown.target libvirt-guests.service multi-user.target", 
"BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:35:50 EDT", "ConditionTimestampMonotonic": "4632658", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/libvirtd.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1172", "ExecMainStartTimestamp": "Tue 2018-10-02 10:35:50 EDT", "ExecMainStartTimestampMonotonic": "4633886", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:35:50 EDT", "InactiveExitTimestampMonotonic": "4633929", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": 
"18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1172", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target virtlockd.socket virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", 
"UnitFileState": "enabled", "WantedBy": "multi-user.target libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestamp": "Tue 2018-10-02 10:35:50 EDT", "WatchdogTimestampMonotonic": "4858181", "WatchdogUSec": "0"}} >2018-10-02 10:43:01,682 p=605 u=mistral | changed: [compute-0] => (item=virtlogd.socket) => {"changed": true, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Tue 2018-10-02 10:35:48 EDT", "ActiveEnterTimestampMonotonic": "3022213", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:35:48 EDT", "AssertTimestampMonotonic": "3021164", "Backlog": "128", "Before": "virtlogd.service sockets.target libvirtd.service shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:35:48 EDT", "ConditionTimestampMonotonic": "3021164", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 
10:35:48 EDT", "InactiveExitTimestampMonotonic": "3022213", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-10-02 10:43:01,712 p=605 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 10:43:01,712 p=605 u=mistral | Tuesday 02 October 2018 10:43:01 -0400 (0:00:00.565) 0:03:31.986 ******* >2018-10-02 10:43:01,744 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,787 p=605 u=mistral | ok: [compute-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 10:43:01,789 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,814 p=605 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 10:43:01,814 p=605 u=mistral | Tuesday 02 October 2018 10:43:01 -0400 (0:00:00.101) 0:03:32.087 ******* >2018-10-02 10:43:01,844 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,875 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,892 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,920 p=605 u=mistral | TASK [Ensure system is NTP time synced] 
**************************************** >2018-10-02 10:43:01,920 p=605 u=mistral | Tuesday 02 October 2018 10:43:01 -0400 (0:00:00.106) 0:03:32.193 ******* >2018-10-02 10:43:01,949 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:01,993 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,051 p=605 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.900167", "end": "2018-10-02 10:43:09.022938", "rc": 0, "start": "2018-10-02 10:43:02.122771", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 10:43:09 ntpdate[15196]: adjust time server 10.11.160.238 offset 0.000737 sec", "stdout_lines": [" 2 Oct 10:43:09 ntpdate[15196]: adjust time server 10.11.160.238 offset 0.000737 sec"]} >2018-10-02 10:43:09,079 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:43:09,079 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:07.158) 0:03:39.352 ******* >2018-10-02 10:43:09,114 p=605 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,115 p=605 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,149 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,151 p=605 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,172 p=605 u=mistral | skipping: 
[ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,175 p=605 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,204 p=605 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-10-02 10:43:09,204 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.124) 0:03:39.477 ******* >2018-10-02 10:43:09,235 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,268 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,282 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,310 p=605 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-10-02 10:43:09,310 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.106) 0:03:39.584 ******* >2018-10-02 10:43:09,347 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,425 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,439 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,466 p=605 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-10-02 10:43:09,467 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.156) 0:03:39.740 ******* >2018-10-02 10:43:09,498 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:43:09,530 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,542 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,569 p=605 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-10-02 10:43:09,570 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.102) 0:03:39.843 ******* >2018-10-02 10:43:09,602 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,634 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,646 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,673 p=605 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-10-02 10:43:09,673 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.103) 0:03:39.947 ******* >2018-10-02 10:43:09,704 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,739 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,751 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,780 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:43:09,780 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.106) 0:03:40.054 ******* >2018-10-02 10:43:09,815 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,847 p=605 u=mistral | skipping: [compute-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,861 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,889 p=605 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 10:43:09,889 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.108) 0:03:40.162 ******* >2018-10-02 10:43:09,923 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,955 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,970 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:09,995 p=605 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-10-02 10:43:09,995 p=605 u=mistral | Tuesday 02 October 2018 10:43:09 -0400 (0:00:00.105) 0:03:40.268 ******* >2018-10-02 10:43:10,024 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,051 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,069 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,090 p=605 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-10-02 10:43:10,090 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.095) 0:03:40.363 ******* >2018-10-02 10:43:10,116 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,142 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:43:10,153 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,174 p=605 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 10:43:10,175 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.084) 0:03:40.448 ******* >2018-10-02 10:43:10,201 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,227 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,238 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,259 p=605 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 10:43:10,260 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.085) 0:03:40.533 ******* >2018-10-02 10:43:10,285 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,311 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,327 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,352 p=605 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 10:43:10,353 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.092) 0:03:40.626 ******* >2018-10-02 10:43:10,380 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,406 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,418 p=605 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,441 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:43:10,441 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.088) 0:03:40.714 ******* >2018-10-02 10:43:10,468 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,497 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,509 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,533 p=605 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 10:43:10,534 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.092) 0:03:40.807 ******* >2018-10-02 10:43:10,567 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,597 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,617 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,652 p=605 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 10:43:10,652 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.118) 0:03:40.925 ******* >2018-10-02 10:43:10,690 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,719 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,736 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:43:10,761 p=605 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 10:43:10,761 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.109) 0:03:41.035 ******* >2018-10-02 10:43:10,793 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,823 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,837 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,863 p=605 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 10:43:10,863 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.101) 0:03:41.136 ******* >2018-10-02 10:43:10,894 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,925 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,938 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:10,967 p=605 u=mistral | TASK [create persistent directories] ******************************************* >2018-10-02 10:43:10,968 p=605 u=mistral | Tuesday 02 October 2018 10:43:10 -0400 (0:00:00.104) 0:03:41.241 ******* >2018-10-02 10:43:11,001 p=605 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,003 p=605 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,004 p=605 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => 
{"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,035 p=605 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,037 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,038 p=605 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,054 p=605 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,059 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,065 p=605 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,095 p=605 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-10-02 10:43:11,095 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.127) 0:03:41.368 ******* >2018-10-02 10:43:11,126 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,158 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,171 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,198 p=605 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-10-02 10:43:11,198 p=605 
u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.102) 0:03:41.471 ******* >2018-10-02 10:43:11,231 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,262 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,275 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,305 p=605 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-10-02 10:43:11,305 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.107) 0:03:41.578 ******* >2018-10-02 10:43:11,338 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,370 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,384 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,411 p=605 u=mistral | TASK [swift logs readme] ******************************************************* >2018-10-02 10:43:11,412 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.106) 0:03:41.685 ******* >2018-10-02 10:43:11,443 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,475 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,488 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,515 p=605 u=mistral | TASK [Check if rsyslog exists] ************************************************* >2018-10-02 10:43:11,516 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.104) 
0:03:41.789 ******* >2018-10-02 10:43:11,548 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,580 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,592 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,623 p=605 u=mistral | TASK [Forward logging to swift.log file] *************************************** >2018-10-02 10:43:11,623 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.107) 0:03:41.896 ******* >2018-10-02 10:43:11,656 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,688 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,701 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,728 p=605 u=mistral | TASK [Restart rsyslogd service after logging conf change] ********************** >2018-10-02 10:43:11,729 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.105) 0:03:42.002 ******* >2018-10-02 10:43:11,761 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,792 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,806 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,833 p=605 u=mistral | TASK [Set fact for SwiftRawDisks] ********************************************** >2018-10-02 10:43:11,833 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.104) 0:03:42.107 ******* >2018-10-02 10:43:11,866 p=605 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,897 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,910 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:11,941 p=605 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-10-02 10:43:11,941 p=605 u=mistral | Tuesday 02 October 2018 10:43:11 -0400 (0:00:00.107) 0:03:42.214 ******* >2018-10-02 10:43:12,005 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,022 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,049 p=605 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-10-02 10:43:12,050 p=605 u=mistral | Tuesday 02 October 2018 10:43:12 -0400 (0:00:00.108) 0:03:42.323 ******* >2018-10-02 10:43:12,112 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,128 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,155 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:43:12,156 p=605 u=mistral | Tuesday 02 October 2018 10:43:12 -0400 (0:00:00.105) 0:03:42.429 ******* >2018-10-02 10:43:12,189 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,222 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,271 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"container_registry_additional_sockets": 
["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-10-02 10:43:12,303 p=605 u=mistral | TASK [include_role] ************************************************************ >2018-10-02 10:43:12,304 p=605 u=mistral | Tuesday 02 October 2018 10:43:12 -0400 (0:00:00.147) 0:03:42.577 ******* >2018-10-02 10:43:12,338 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,371 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:12,442 p=605 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-10-02 10:43:12,443 p=605 u=mistral | Tuesday 02 October 2018 10:43:12 -0400 (0:00:00.138) 0:03:42.716 ******* >2018-10-02 10:43:12,668 p=605 u=mistral | changed: [ceph-0] => {"changed": true} >2018-10-02 10:43:12,695 p=605 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-10-02 10:43:12,695 p=605 u=mistral | Tuesday 02 October 2018 10:43:12 -0400 (0:00:00.252) 0:03:42.968 ******* >2018-10-02 10:43:13,290 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-75.git8633870.el7_5.x86_64 providing docker is already installed"]} >2018-10-02 10:43:13,315 p=605 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-10-02 10:43:13,315 p=605 u=mistral | Tuesday 02 October 2018 10:43:13 -0400 (0:00:00.620) 0:03:43.588 ******* >2018-10-02 10:43:13,590 p=605 u=mistral | changed: [ceph-0] => {"changed": 
true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:13,616 p=605 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-10-02 10:43:13,616 p=605 u=mistral | Tuesday 02 October 2018 10:43:13 -0400 (0:00:00.300) 0:03:43.889 ******* >2018-10-02 10:43:13,928 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-10-02 10:43:13,995 p=605 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-10-02 10:43:13,995 p=605 u=mistral | Tuesday 02 October 2018 10:43:13 -0400 (0:00:00.378) 0:03:44.268 ******* >2018-10-02 10:43:14,236 p=605 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:43:14,257 p=605 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-10-02 10:43:14,257 p=605 u=mistral | Tuesday 02 October 2018 10:43:14 -0400 (0:00:00.262) 0:03:44.530 ******* >2018-10-02 10:43:14,508 p=605 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-10-02 10:43:14,530 p=605 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-10-02 10:43:14,530 p=605 u=mistral | Tuesday 02 October 2018 10:43:14 -0400 (0:00:00.273) 0:03:44.804 ******* >2018-10-02 10:43:14,745 p=605 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", 
"path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:14,811 p=605 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-10-02 10:43:14,811 p=605 u=mistral | Tuesday 02 October 2018 10:43:14 -0400 (0:00:00.280) 0:03:45.084 ******* >2018-10-02 10:43:15,401 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491394.86-8538698004752/source", "state": "file", "uid": 0} >2018-10-02 10:43:15,424 p=605 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-10-02 10:43:15,425 p=605 u=mistral | Tuesday 02 October 2018 10:43:15 -0400 (0:00:00.613) 0:03:45.698 ******* >2018-10-02 10:43:15,678 p=605 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:43:15,703 p=605 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-10-02 10:43:15,703 p=605 u=mistral | Tuesday 02 October 2018 10:43:15 -0400 (0:00:00.278) 0:03:45.976 ******* >2018-10-02 10:43:15,952 p=605 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-10-02 10:43:15,976 p=605 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-10-02 10:43:15,976 p=605 u=mistral | Tuesday 02 October 2018 10:43:15 -0400 (0:00:00.273) 0:03:46.249 ******* >2018-10-02 10:43:16,192 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-10-02 10:43:16,218 
p=605 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-10-02 10:43:16,218 p=605 u=mistral | Tuesday 02 October 2018 10:43:16 -0400 (0:00:00.241) 0:03:46.491 ******* >2018-10-02 10:43:16,242 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:16,244 p=605 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-10-02 10:43:16,244 p=605 u=mistral | Tuesday 02 October 2018 10:43:16 -0400 (0:00:00.025) 0:03:46.517 ******* >2018-10-02 10:43:16,499 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002301", "end": "2018-10-02 10:43:16.444164", "rc": 0, "start": "2018-10-02 10:43:16.441863", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-10-02 10:43:16,500 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-10-02 10:43:16,500 p=605 u=mistral | Tuesday 02 October 2018 10:43:16 -0400 (0:00:00.256) 0:03:46.773 ******* >2018-10-02 10:43:16,828 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-10-02 10:43:16,829 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-10-02 10:43:16,829 p=605 u=mistral | Tuesday 02 October 2018 10:43:16 -0400 (0:00:00.328) 0:03:47.102 ******* >2018-10-02 10:43:18,421 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket network.target docker-storage-setup.service basic.target system.slice registries.service rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target 
paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-10-02 10:43:18,423 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-10-02 10:43:18,423 p=605 u=mistral | Tuesday 02 October 2018 10:43:18 -0400 (0:00:01.594) 0:03:48.696 ******* >2018-10-02 10:43:18,488 p=605 u=mistral | Pausing for 10 seconds >2018-10-02 10:43:18,488 p=605 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-10-02 10:43:18,489 p=605 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-10-02 10:43:28,492 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-10-02 10:43:18.488206", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-10-02 10:43:28.488389", "user_input": ""} >2018-10-02 10:43:28,493 p=605 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-10-02 10:43:28,493 p=605 u=mistral | Tuesday 02 October 2018 10:43:28 -0400 (0:00:10.069) 0:03:58.766 ******* >2018-10-02 10:43:28,773 p=605 u=mistral | changed: [ceph-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.034927", "end": "2018-10-02 10:43:28.734972", "rc": 0, "start": "2018-10-02 10:43:28.700045", "stderr": "", "stderr_lines": [], "stdout": 
"REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-10-02 10:43:28,798 p=605 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-10-02 10:43:28,798 p=605 u=mistral | Tuesday 02 October 2018 10:43:28 -0400 (0:00:00.304) 0:03:59.071 ******* >2018-10-02 10:43:29,185 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Tue 2018-10-02 10:43:18 EDT", "ActiveEnterTimestampMonotonic": "444986299", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket network.target docker-storage-setup.service basic.target system.slice registries.service rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Tue 2018-10-02 10:43:17 EDT", "AssertTimestampMonotonic": "443763597", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Tue 2018-10-02 10:43:17 EDT", "ConditionTimestampMonotonic": "443763597", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", 
"EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14020", "ExecMainStartTimestamp": "Tue 2018-10-02 10:43:17 EDT", "ExecMainStartTimestampMonotonic": "443764841", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Tue 2018-10-02 10:43:17 EDT] ; stop_time=[n/a] ; pid=14020 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Tue 2018-10-02 10:43:17 EDT", "InactiveExitTimestampMonotonic": "443764870", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", 
"LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14020", "MemoryAccounting": "no", "MemoryCurrent": "64741376", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "17", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Tue 2018-10-02 10:43:18 EDT", "WatchdogTimestampMonotonic": "444986248", "WatchdogUSec": "0"}} >2018-10-02 10:43:29,214 
p=605 u=mistral | TASK [NTP settings] ************************************************************ >2018-10-02 10:43:29,214 p=605 u=mistral | Tuesday 02 October 2018 10:43:29 -0400 (0:00:00.416) 0:03:59.488 ******* >2018-10-02 10:43:29,248 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,282 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,383 p=605 u=mistral | ok: [ceph-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-10-02 10:43:29,423 p=605 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-10-02 10:43:29,423 p=605 u=mistral | Tuesday 02 October 2018 10:43:29 -0400 (0:00:00.208) 0:03:59.696 ******* >2018-10-02 10:43:29,466 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,499 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,516 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,542 p=605 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-10-02 10:43:29,542 p=605 u=mistral | Tuesday 02 October 2018 10:43:29 -0400 (0:00:00.119) 0:03:59.815 ******* >2018-10-02 10:43:29,573 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:29,606 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:36,733 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:06.860526", "end": "2018-10-02 10:43:36.706262", 
"rc": 0, "start": "2018-10-02 10:43:29.845736", "stderr": "", "stderr_lines": [], "stdout": " 2 Oct 10:43:36 ntpdate[14147]: adjust time server 10.11.160.238 offset 0.001936 sec", "stdout_lines": [" 2 Oct 10:43:36 ntpdate[14147]: adjust time server 10.11.160.238 offset 0.001936 sec"]} >2018-10-02 10:43:36,741 p=605 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-10-02 10:43:36,761 p=605 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 10:43:36,762 p=605 u=mistral | Tuesday 02 October 2018 10:43:36 -0400 (0:00:07.219) 0:04:07.035 ******* >2018-10-02 10:43:36,797 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-10-02 10:43:36,812 p=605 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-10-02 10:43:36,812 p=605 u=mistral | Tuesday 02 October 2018 10:43:36 -0400 (0:00:00.050) 0:04:07.085 ******* >2018-10-02 10:43:36,992 p=605 u=mistral | ok: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "size": 88, "state": "directory", "uid": 42430} >2018-10-02 10:43:37,129 p=605 u=mistral | ok: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "size": 69, "state": "directory", "uid": 42430} >2018-10-02 10:43:37,272 p=605 u=mistral | ok: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "gid": 42430, "group": "mistral", "item": 
"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 80, "state": "directory", "uid": 42430} >2018-10-02 10:43:37,291 p=605 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 10:43:37,291 p=605 u=mistral | Tuesday 02 October 2018 10:43:37 -0400 (0:00:00.478) 0:04:07.564 ******* >2018-10-02 10:43:38,003 p=605 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "6a8fb65ce5927e5a73af4740d213e37f82501bd0", "dest": "/var/lib/mistral/overcloud/ceph-ansible/inventory.yml", "gid": 42430, "group": "mistral", "md5sum": "d48c1126f794b991b15c5c283705007a", "mode": "0644", "owner": "mistral", "size": 526, "src": "/tmp/ansible-/ansible-tmp-1538491417.71-270208461178584/source", "state": "file", "uid": 42430} >2018-10-02 10:43:38,019 p=605 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 10:43:38,019 p=605 u=mistral | Tuesday 02 October 2018 10:43:38 -0400 (0:00:00.728) 0:04:08.292 ******* >2018-10-02 10:43:38,116 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "QCxcxEleE6gqzEZGAy8kTIeiR", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.10:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-12", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", 
"containerized_deployment": true, "docker": true, "fsid": "4398e5b0-c63c-11e8-b95a-525400c8bd81", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQBkYLNbAAAAABAAL4iwyQ6vA9lugUDtB5faig==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQBkYLNbAAAAABAAiIi68YEgekOzpBkJSSiN4g==", "mode": "0600", "name": "client.radosgw"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "backups", 
"pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-10-02 10:43:38,137 p=605 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 10:43:38,137 p=605 u=mistral | Tuesday 02 October 2018 10:43:38 -0400 (0:00:00.117) 0:04:08.410 ******* >2018-10-02 10:43:38,487 p=605 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "775f61717860734078461897b37b4d851c209251", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml", "gid": 42430, "group": "mistral", "md5sum": "9d8be5e3f5db661fbfb3d60cc8d4dd26", "mode": "0644", "owner": "mistral", "size": 3078, "src": "/tmp/ansible-/ansible-tmp-1538491418.19-181586834026378/source", "state": "file", "uid": 42430} >2018-10-02 10:43:38,502 p=605 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 10:43:38,502 p=605 u=mistral | Tuesday 02 October 2018 10:43:38 -0400 (0:00:00.365) 0:04:08.775 ******* >2018-10-02 10:43:38,538 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-10-02 10:43:38,553 p=605 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 10:43:38,553 p=605 u=mistral | Tuesday 02 October 2018 10:43:38 -0400 (0:00:00.050) 0:04:08.826 ******* >2018-10-02 10:43:38,868 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "736efc435c358cb150f966050ebc3ab5061819cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": 
"/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml", "size": 88, "state": "file", "uid": 42430} >2018-10-02 10:43:38,882 p=605 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 10:43:38,882 p=605 u=mistral | Tuesday 02 October 2018 10:43:38 -0400 (0:00:00.329) 0:04:09.155 ******* >2018-10-02 10:43:39,204 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_data.json", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_data.json", "size": 2, "state": "file", "uid": 42430} >2018-10-02 10:43:39,217 p=605 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 10:43:39,218 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.335) 0:04:09.491 ******* >2018-10-02 10:43:39,604 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "6295759c7c940d5f447c8f2aa21ca4b89c07424a", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "size": 527, "state": "file", "uid": 42430} >2018-10-02 10:43:39,622 p=605 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 10:43:39,623 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.405) 0:04:09.896 ******* >2018-10-02 10:43:39,712 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:39,771 p=605 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 10:43:39,771 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.148) 0:04:10.044 ******* 
>2018-10-02 10:43:39,791 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:39,806 p=605 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 10:43:39,806 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.035) 0:04:10.079 ******* >2018-10-02 10:43:39,825 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:39,840 p=605 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 10:43:39,840 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.033) 0:04:10.113 ******* >2018-10-02 10:43:39,858 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:39,873 p=605 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 10:43:39,873 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.033) 0:04:10.147 ******* >2018-10-02 10:43:39,894 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:39,908 p=605 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-10-02 10:43:39,909 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.035) 0:04:10.182 ******* >2018-10-02 10:43:39,941 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-10-02 10:43:39,956 p=605 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-10-02 10:43:39,957 p=605 u=mistral | Tuesday 02 October 2018 10:43:39 -0400 (0:00:00.047) 0:04:10.230 ******* >2018-10-02 10:43:40,273 p=605 u=mistral | ok: [undercloud] => 
{"changed": false, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "size": 46, "state": "file", "uid": 42430} >2018-10-02 10:43:40,289 p=605 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-10-02 10:43:40,289 p=605 u=mistral | Tuesday 02 October 2018 10:43:40 -0400 (0:00:00.332) 0:04:10.562 ******* >2018-10-02 10:43:40,327 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQBkYLNbAAAAABAAZaj/1kV4/FOi0ZBEaxPL1g==", "monitor_secret": "AQBkYLNbAAAAABAAPtOjxXjymErzGNcQab4sRQ=="}}, "changed": false} >2018-10-02 10:43:40,343 p=605 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-10-02 10:43:40,343 p=605 u=mistral | Tuesday 02 October 2018 10:43:40 -0400 (0:00:00.053) 0:04:10.616 ******* >2018-10-02 10:43:40,667 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "8902c5e22a09be21d37b1e5e2f4a9bfc88793ecd", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mons.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mons.yml", "size": 112, "state": "file", "uid": 42430} >2018-10-02 10:43:40,685 p=605 u=mistral | TASK [set_fact] **************************************************************** >2018-10-02 10:43:40,685 p=605 u=mistral | Tuesday 02 October 2018 10:43:40 -0400 (0:00:00.342) 0:04:10.958 ******* >2018-10-02 10:43:40,720 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"log_file": "tripleo-container-image-prepare.log"}, "changed": false} >2018-10-02 10:43:40,736 p=605 u=mistral | TASK [Create temp file for prepare parameter] ********************************** >2018-10-02 
10:43:40,736 p=605 u=mistral | Tuesday 02 October 2018 10:43:40 -0400 (0:00:00.051) 0:04:11.009 ******* >2018-10-02 10:43:41,007 p=605 u=mistral | changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.nM8iLo-prepare-param", "size": 0, "state": "file", "uid": 42430} >2018-10-02 10:43:41,023 p=605 u=mistral | TASK [Create temp file for role data] ****************************************** >2018-10-02 10:43:41,023 p=605 u=mistral | Tuesday 02 October 2018 10:43:41 -0400 (0:00:00.287) 0:04:11.297 ******* >2018-10-02 10:43:41,186 p=605 u=mistral | changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.g82zmd-role-data", "size": 0, "state": "file", "uid": 42430} >2018-10-02 10:43:41,200 p=605 u=mistral | TASK [Write ContainerImagePrepare parameter file] ****************************** >2018-10-02 10:43:41,200 p=605 u=mistral | Tuesday 02 October 2018 10:43:41 -0400 (0:00:00.176) 0:04:11.473 ******* >2018-10-02 10:43:41,533 p=605 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "ee4783631076c19990a802865b8c0a3c25baeba1", "dest": "/tmp/ansible.nM8iLo-prepare-param", "gid": 42430, "group": "mistral", "md5sum": "be85bccfbd1e18c6ab1a8370c364fe60", "mode": "0600", "owner": "mistral", "size": 11187, "src": "/tmp/ansible-/ansible-tmp-1538491421.24-252036490077705/source", "state": "file", "uid": 42430} >2018-10-02 10:43:41,549 p=605 u=mistral | TASK [Write role data file] **************************************************** >2018-10-02 10:43:41,549 p=605 u=mistral | Tuesday 02 October 2018 10:43:41 -0400 (0:00:00.348) 0:04:11.822 ******* >2018-10-02 10:43:41,880 p=605 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "f4bd6ad5174a88673a5da2c3b6c2de3827e06b7b", "dest": "/tmp/ansible.g82zmd-role-data", "gid": 42430, "group": "mistral", "md5sum": "d3ae9b59dea6998091971def17a31a6a", "mode": 
"0600", "owner": "mistral", "size": 13059, "src": "/tmp/ansible-/ansible-tmp-1538491421.58-131702927216368/source", "state": "file", "uid": 42430} >2018-10-02 10:43:41,893 p=605 u=mistral | TASK [Run tripleo-container-image-prepare] ************************************* >2018-10-02 10:43:41,894 p=605 u=mistral | Tuesday 02 October 2018 10:43:41 -0400 (0:00:00.344) 0:04:12.167 ******* >2018-10-02 10:43:43,660 p=605 u=mistral | [WARNING]: Consider using 'become', 'become_method', and 'become_user' rather >than running sudo > >2018-10-02 10:43:43,661 p=605 u=mistral | changed: [undercloud] => {"changed": true, "cmd": "sudo /usr/bin/tripleo-container-image-prepare --roles-file /tmp/ansible.g82zmd-role-data --environment-file /tmp/ansible.nM8iLo-prepare-param --cleanup partial 2> tripleo-container-image-prepare.log", "delta": "0:00:01.602124", "end": "2018-10-02 10:43:43.642496", "rc": 0, "start": "2018-10-02 10:43:42.040372", "stderr": "", "stderr_lines": [], "stdout": "null\n...", "stdout_lines": ["null", "..."]} >2018-10-02 10:43:43,676 p=605 u=mistral | TASK [Delete param file] ******************************************************* >2018-10-02 10:43:43,676 p=605 u=mistral | Tuesday 02 October 2018 10:43:43 -0400 (0:00:01.782) 0:04:13.950 ******* >2018-10-02 10:43:43,851 p=605 u=mistral | changed: [undercloud] => {"changed": true, "path": "/tmp/ansible.nM8iLo-prepare-param", "state": "absent"} >2018-10-02 10:43:43,866 p=605 u=mistral | TASK [Delete role file] ******************************************************** >2018-10-02 10:43:43,867 p=605 u=mistral | Tuesday 02 October 2018 10:43:43 -0400 (0:00:00.190) 0:04:14.140 ******* >2018-10-02 10:43:44,034 p=605 u=mistral | changed: [undercloud] => {"changed": true, "path": "/tmp/ansible.g82zmd-role-data", "state": "absent"} >2018-10-02 10:43:44,050 p=605 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-10-02 10:43:44,050 p=605 u=mistral | Tuesday 02 October 2018 
10:43:44 -0400 (0:00:00.183) 0:04:14.324 ******* >2018-10-02 10:43:44,088 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-10-02 10:43:44,103 p=605 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-10-02 10:43:44,103 p=605 u=mistral | Tuesday 02 October 2018 10:43:44 -0400 (0:00:00.052) 0:04:14.376 ******* >2018-10-02 10:43:44,439 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/clients.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/clients.yml", "size": 2, "state": "file", "uid": 42430} >2018-10-02 10:43:44,455 p=605 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-10-02 10:43:44,455 p=605 u=mistral | Tuesday 02 October 2018 10:43:44 -0400 (0:00:00.352) 0:04:14.728 ******* >2018-10-02 10:43:44,500 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-10-02 10:43:44,517 p=605 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-10-02 10:43:44,517 p=605 u=mistral | Tuesday 02 October 2018 10:43:44 -0400 (0:00:00.061) 0:04:14.790 ******* >2018-10-02 10:43:44,842 p=605 u=mistral | ok: [undercloud] => {"changed": false, "checksum": "a209fd8d503be2b45dc87935a930c08a563088cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/osds.yml", "gid": 42430, "group": "mistral", "mode": "0644", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/osds.yml", "size": 134, "state": "file", "uid": 42430} 
>2018-10-02 10:43:44,849 p=605 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-10-02 10:43:44,858 p=605 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-10-02 10:43:44,896 p=605 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-10-02 10:43:44,896 p=605 u=mistral | Tuesday 02 October 2018 10:43:44 -0400 (0:00:00.378) 0:04:15.169 ******* >2018-10-02 10:43:45,117 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:45,151 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:45,167 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:45,195 p=605 u=mistral | TASK [Delete existing /var/lib/tripleo-config/check-mode directory for check mode] *** >2018-10-02 10:43:45,195 p=605 u=mistral | Tuesday 02 October 2018 10:43:45 -0400 (0:00:00.299) 0:04:15.468 ******* >2018-10-02 10:43:45,229 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:45,261 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:45,275 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 
10:43:45,302 p=605 u=mistral | TASK [Create /var/lib/tripleo-config/check-mode directory for check mode] ****** >2018-10-02 10:43:45,302 p=605 u=mistral | Tuesday 02 October 2018 10:43:45 -0400 (0:00:00.107) 0:04:15.575 ******* >2018-10-02 10:43:45,337 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:45,368 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:45,388 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:45,419 p=605 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-10-02 10:43:45,419 p=605 u=mistral | Tuesday 02 October 2018 10:43:45 -0400 (0:00:00.116) 0:04:15.692 ******* >2018-10-02 10:43:45,977 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8cc2a8154fe8261f1ad4dbbf7151db6f5d016a04", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "ea4a5c9cd9eca53a460514b61dc3d011", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1631, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491425.47-7909859958076/source", "state": "file", "uid": 0} >2018-10-02 10:43:46,000 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "44355f328588ff032fb9d91a3fdf2a8f427f6ac1", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "d14bfa59823532755440579b4b515901", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1589, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491425.54-232610046443591/source", "state": "file", "uid": 0} >2018-10-02 10:43:46,019 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "0b7508ea11b5540c4e639bbb30162d8fa1fc1cc5", "dest": 
"/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "43135571b1950c38bbce98ace30272ac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1641, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491425.5-154114364978597/source", "state": "file", "uid": 0} >2018-10-02 10:43:46,048 p=605 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 10:43:46,048 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 -0400 (0:00:00.628) 0:04:16.321 ******* >2018-10-02 10:43:46,081 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,110 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,125 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,147 p=605 u=mistral | TASK [Diff puppet step_config manifest changes for check mode] ***************** >2018-10-02 10:43:46,147 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 -0400 (0:00:00.099) 0:04:16.420 ******* >2018-10-02 10:43:46,174 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:43:46,203 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:43:46,218 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:43:46,239 p=605 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-10-02 10:43:46,239 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 -0400 (0:00:00.092) 0:04:16.513 ******* >2018-10-02 10:43:46,462 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 10:43:46,484 p=605 u=mistral | 
changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 10:43:46,497 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-10-02 10:43:46,525 p=605 u=mistral | TASK [Delete existing /var/lib/docker-puppet/check-mode for check mode] ******** >2018-10-02 10:43:46,525 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 -0400 (0:00:00.285) 0:04:16.798 ******* >2018-10-02 10:43:46,557 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,588 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,602 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,629 p=605 u=mistral | TASK [Create /var/lib/docker-puppet/check-mode for check mode] ***************** >2018-10-02 10:43:46,629 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 -0400 (0:00:00.103) 0:04:16.902 ******* >2018-10-02 10:43:46,663 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,694 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,709 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:46,736 p=605 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-10-02 10:43:46,736 p=605 u=mistral | Tuesday 02 October 2018 10:43:46 
-0400 (0:00:00.106) 0:04:17.009 ******* >2018-10-02 10:43:47,312 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "b3363e1a751a8a08f70b1cdcdb25fb401ca3ae14", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "2663d832240304f41aa83aa686212527", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 309, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491426.85-203143439619624/source", "state": "file", "uid": 0} >2018-10-02 10:43:47,334 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c89e1b9f795e5727c7e181b2184927fb1c907aaa", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "ec6efda3b5bbb102a9ef3288d38138e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 15684, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491426.84-134328235306081/source", "state": "file", "uid": 0} >2018-10-02 10:43:47,365 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "84cd01e8e56b134f3242d2b61c139ff7cb5c4499", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "5e0fce94ac17c7c8e1e04aea47eca983", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491426.83-77329633037910/source", "state": "file", "uid": 0} >2018-10-02 10:43:47,393 p=605 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 10:43:47,393 p=605 u=mistral | Tuesday 02 October 2018 10:43:47 -0400 (0:00:00.657) 0:04:17.667 ******* >2018-10-02 10:43:47,428 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:47,460 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-10-02 10:43:47,481 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:43:47,510 p=605 u=mistral | TASK [Diff docker-puppet.json changes for check mode] ************************** >2018-10-02 10:43:47,510 p=605 u=mistral | Tuesday 02 October 2018 10:43:47 -0400 (0:00:00.116) 0:04:17.783 ******* >2018-10-02 10:43:47,544 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:43:47,577 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:43:47,591 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:43:47,618 p=605 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-10-02 10:43:47,618 p=605 u=mistral | Tuesday 02 October 2018 10:43:47 -0400 (0:00:00.108) 0:04:17.891 ******* >2018-10-02 10:43:47,834 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:47,864 p=605 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:47,892 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:47,919 p=605 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-10-02 10:43:47,920 p=605 u=mistral | Tuesday 02 October 2018 10:43:47 -0400 (0:00:00.301) 0:04:18.193 ******* >2018-10-02 10:43:48,134 p=605 u=mistral | ok: [controller-0] => {"changed": false, "path": 
"/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 10:43:48,165 p=605 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 10:43:48,194 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-10-02 10:43:48,221 p=605 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-10-02 10:43:48,221 p=605 u=mistral | Tuesday 02 October 2018 10:43:48 -0400 (0:00:00.301) 0:04:18.494 ******* >2018-10-02 10:43:48,827 p=605 u=mistral | changed: [controller-0] => (item=create_swift_secret.sh) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": ["create_swift_secret.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid 
--payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}], "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491428.33-135215117869162/source", "state": "file", "uid": 0} >2018-10-02 10:43:48,830 p=605 u=mistral | changed: [compute-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491428.35-140033764318695/source", "state": "file", "uid": 0} >2018-10-02 10:43:49,344 p=605 u=mistral | changed: [compute-0] => (item=nova_statedir_ownership.py) => {"changed": true, "checksum": "052884875dafcd3e79ee18bebaed25f6994a1c37", "dest": "/var/lib/docker-config-scripts/nova_statedir_ownership.py", "gid": 0, "group": "root", "item": ["nova_statedir_ownership.py", {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 
Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to 
manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = 
os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}], "md5sum": "c8d51232f071c7b1fef053299a1b66c0", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6075, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491428.86-100174967824269/source", "state": "file", "uid": 0} >2018-10-02 10:43:49,346 p=605 u=mistral | changed: [controller-0] => (item=docker_puppet_apply.sh) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": ["docker_puppet_apply.sh", {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho 
\"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}], "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491428.86-109341223007483/source", "state": "file", "uid": 0} >2018-10-02 10:43:49,842 p=605 u=mistral | changed: [controller-0] => (item=neutron_ovs_agent_launcher.sh) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": ["neutron_ovs_agent_launcher.sh", {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}], "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491429.37-70952807311270/source", "state": "file", "uid": 0} >2018-10-02 10:43:50,329 p=605 u=mistral | changed: [controller-0] => 
(item=nova_api_discover_hosts.sh) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": ["nova_api_discover_hosts.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host 
discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}], "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491429.87-3905397300911/source", "state": "file", "uid": 0} >2018-10-02 10:43:50,795 p=605 u=mistral | changed: [controller-0] => (item=nova_api_ensure_default_cell.sh) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": ["nova_api_ensure_default_cell.sh", {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}], "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491430.36-17064287430089/source", "state": "file", "uid": 0} >2018-10-02 10:43:51,275 p=605 u=mistral | 
changed: [controller-0] => (item=set_swift_keymaster_key_id.sh) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": ["set_swift_keymaster_key_id.sh", {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}], "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491430.83-276779644550177/source", "state": "file", "uid": 0} >2018-10-02 10:43:51,310 p=605 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-10-02 
10:43:51,311 p=605 u=mistral | Tuesday 02 October 2018 10:43:51 -0400 (0:00:03.089) 0:04:21.584 ******* >2018-10-02 10:43:51,445 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,457 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,467 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,477 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,478 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,488 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,489 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,498 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,500 p=605 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,501 p=605 u=mistral 
| ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,509 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,519 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,530 p=605 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,531 p=605 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,532 p=605 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,533 p=605 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,540 p=605 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,552 p=605 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,564 p=605 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,575 p=605 u=mistral | 
ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,575 p=605 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,605 p=605 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-10-02 10:43:51,605 p=605 u=mistral | Tuesday 02 October 2018 10:43:51 -0400 (0:00:00.293) 0:04:21.878 ******* >2018-10-02 10:43:51,789 p=605 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:51,813 p=605 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:52,307 p=605 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:43:52,333 p=605 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-10-02 10:43:52,334 p=605 u=mistral | Tuesday 02 October 2018 10:43:52 -0400 (0:00:00.728) 0:04:22.607 ******* >2018-10-02 10:43:52,896 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "b3a14634bd5ac7bb56ac446c9c56588b491de4dd", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "41909313044db461888a0cbb954c17af", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 152634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491432.39-268142080387061/source", "state": "file", "uid": 0} >2018-10-02 10:43:52,963 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": 
"47e6b90cf133abcc759ebca645ccd7f04261545f", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "c46a2b4f258d2872a490e4259933884f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17511, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491432.41-69651911620939/source", "state": "file", "uid": 0} >2018-10-02 10:43:52,984 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e18ade306ec73767bc37d9997f5a6c043e08ae9a", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "13ae8ed298be30c0b0e40e4f4956b7e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1477, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491432.45-210576112338731/source", "state": "file", "uid": 0} >2018-10-02 10:43:53,015 p=605 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-10-02 10:43:53,015 p=605 u=mistral | Tuesday 02 October 2018 10:43:53 -0400 (0:00:00.681) 0:04:23.288 ******* >2018-10-02 10:43:53,657 p=605 u=mistral | changed: [compute-0] => (item=step_1) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.11-6733350020246/source", "state": "file", "uid": 0} >2018-10-02 10:43:53,670 p=605 u=mistral | changed: [ceph-0] => (item=step_1) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.12-213009326306801/source", "state": "file", "uid": 0} >2018-10-02 10:43:53,675 p=605 u=mistral | changed: [controller-0] => (item=step_1) => {"changed": true, "checksum": "bc58a399137e67c680429c5a172a695049bc5ee4", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": ["step_1", {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", 
"DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=mG0FjSjrDN8mWwf9YJSsEJGuQ", "DB_ROOT_PASSWORD=5BSzxzKG9a"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=fbxKGjRmnA14UIbGdAmW"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", 
"privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}], "md5sum": "6254b603ec9b76635de5e7cc8ec526e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9190, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.11-108972649463951/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,172 p=605 u=mistral | changed: [ceph-0] => (item=step_2) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {}], "md5sum": 
"99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.68-270965260644516/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,183 p=605 u=mistral | changed: [compute-0] => (item=step_2) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.66-139507883757937/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,192 p=605 u=mistral | changed: [controller-0] => (item=step_2) => {"changed": true, "checksum": "d436923b6bf668ccc71adce25b4af374ff3047f1", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": ["step_2", {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", 
"include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", 
"2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}], "md5sum": "c6a23f28975623ff64c4c5158ea30298", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22855, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491433.68-5502505295245/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,667 p=605 u=mistral | changed: [ceph-0] => (item=step_3) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.18-47840175344039/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,723 p=605 u=mistral | changed: [compute-0] => (item=step_3) => {"changed": true, "checksum": "4a96db65f846bf6c09dc1fcb89bb5bad098ff3e1", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}], "md5sum": "c685d2413ae9ceb911d50139c1a8d8d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7208, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.19-173005305944529/source", "state": "file", "uid": 0} >2018-10-02 10:43:54,739 p=605 u=mistral | changed: [controller-0] => (item=step_3) => {"changed": true, "checksum": "25119311f9f4f1f313da1a7026c1ade80dd8da11", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": ["step_3", 
{"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", 
"/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "Q4TKZfrksKpvC1QXOQA8ciL7S"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", 
"/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c 
'/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i 
\"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}], "md5sum": "defa48e175322c689a940b9467902b34", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 29101, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.2-64815637010239/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,143 p=605 u=mistral | changed: [ceph-0] => (item=step_4) => {"changed": true, "checksum": "b4026aa009bb07e185a7d24fc6ae29313522e7ca", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers:/var/log/containers"]}}], "md5sum": "c25ae9212c604d8902701f31742ce214", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.68-172723421253423/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,226 p=605 u=mistral | changed: [compute-0] => (item=step_4) => {"changed": true, "checksum": "c9336e49f241d8245859d2d8a7a89600524b4bab", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '4398e5b0-c63c-11e8-b95a-525400c8bd81' --base64 'AQBkYLNbAAAAABAAZ3+3bk/SmO/g+JlYvBX41Q=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-26.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}], "md5sum": "60ae00a8c7bd0b5d87a2eef258c54629", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8816, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.73-63512453912436/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,294 p=605 u=mistral | changed: [controller-0] => (item=step_4) => {"changed": true, "checksum": "f0ef7eafc400be0c7bef94d894bcdc91c90f877d", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": ["step_4", {"aodh_api": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-26.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-26.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}], "md5sum": "e09b84d266c6504660080a51f2197cb8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 60195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491434.75-197754493751418/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,639 p=605 u=mistral | changed: [ceph-0] => (item=step_5) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.15-50980910758969/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,762 p=605 u=mistral | changed: [compute-0] => (item=step_5) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.23-262215140701105/source", "state": "file", "uid": 0} >2018-10-02 10:43:55,842 p=605 u=mistral | changed: [controller-0] => (item=step_5) => {"changed": true, "checksum": "dd01e2817906dede161fee5c6e73b9b963fd59ef", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": ["step_5", {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_api_online_migrations": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db online_data_migrations'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug 
--verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-26.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1538490348"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}, "nova_online_migrations": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db online_data_migrations'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}}], "md5sum": "e86fa4e206782ba928d422cfe827ab46", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19124, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.3-81264193054476/source", "state": "file", "uid": 0} >2018-10-02 10:43:56,132 p=605 u=mistral | changed: [ceph-0] => (item=step_6) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", 
"gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.65-170519545078827/source", "state": "file", "uid": 0} >2018-10-02 10:43:56,290 p=605 u=mistral | changed: [compute-0] => (item=step_6) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.77-250243545835257/source", "state": "file", "uid": 0} >2018-10-02 10:43:56,360 p=605 u=mistral | changed: [controller-0] => (item=step_6) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": ["step_6", {}], "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491435.85-73653560288078/source", "state": "file", "uid": 0} >2018-10-02 10:43:56,404 p=605 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-10-02 10:43:56,404 p=605 u=mistral | Tuesday 02 October 2018 10:43:56 -0400 (0:00:03.389) 0:04:26.677 ******* >2018-10-02 10:43:56,630 p=605 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:56,655 p=605 u=mistral | changed: [compute-0] => 
{"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:56,684 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-10-02 10:43:56,712 p=605 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-10-02 10:43:56,712 p=605 u=mistral | Tuesday 02 October 2018 10:43:56 -0400 (0:00:00.308) 0:04:26.986 ******* >2018-10-02 10:43:57,313 p=605 u=mistral | changed: [ceph-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491436.84-83228240486451/source", "state": "file", "uid": 0} >2018-10-02 10:43:57,356 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_compute.json) => {"changed": true, "checksum": "76874c2f28ef848007e675a4b52d67ff252c4cf1", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_compute.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "4a3ce71cb7b5b699dcbd2ca937e5ea7c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 323, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491436.82-150700328511396/source", "state": "file", "uid": 0} >2018-10-02 10:43:57,486 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_api.json) => {"changed": true, "checksum": "7eddb177fe0e9635a939871db86a4cef04690de6", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "3cd09d6b656982376119207e483b6aee", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491436.96-168950662826954/source", "state": "file", "uid": 0} >2018-10-02 10:43:57,893 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": true, "checksum": "d310c205955d0f5d508329bf624cbe8345535c34", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "md5sum": "22ef322b4a91ebca32ec0dd9c41be102", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491437.37-72467806289291/source", "state": "file", "uid": 0} >2018-10-02 
10:43:58,018 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_evaluator.json) => {"changed": true, "checksum": "01aea38e8d76afa53499dc261de8b66faadc5ff8", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_evaluator.json", {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "b4dfbf9ca1823ec2828eb3c2b4dc6126", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 398, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491437.5-183834149821234/source", "state": "file", "uid": 0} >2018-10-02 10:43:58,430 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491437.9-12969792368511/source", "state": "file", "uid": 0} >2018-10-02 10:43:58,513 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_listener.json) => {"changed": true, "checksum": "f1bb3c5d81fed87f945e29bbb59dbc822fe154ec", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_listener.json", {"command": "/usr/bin/aodh-listener", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "165c9900e4df3de03c25903072139acf", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491438.03-1786140779350/source", "state": "file", "uid": 0} >2018-10-02 10:43:58,985 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": true, "checksum": "297543dc37af33605befea77ef4a371f0a6a3662", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "51a8878fe08bb182bee7ac73da2e17d3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491438.44-130878642911406/source", "state": "file", "uid": 0} >2018-10-02 10:43:59,018 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/aodh_notifier.json) => {"changed": true, "checksum": "3524989e2f062b628ff39bfa1826a299e9e87643", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/aodh_notifier.json", {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}], "md5sum": "72ee81994099352750394944a4944691", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491438.52-268326055257155/source", "state": "file", "uid": 0} >2018-10-02 10:43:59,551 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova-migration-target.json) => {"changed": true, "checksum": "5ebfe90d3d5db802ffc11e62806a1c471e899f42", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova-migration-target.json", {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}], "md5sum": "3a6d1baa3e960be9487b87e96286b82f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491439.0-217735908294096/source", "state": "file", "uid": 0} >2018-10-02 10:43:59,552 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_central.json) => {"changed": true, "checksum": "33088791c573ef63b952f0f1fde999b995c207f2", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_central.json", {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "225cd56e124ed8119b457e8966d0f1e5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 323, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491439.03-22896317518717/source", "state": "file", "uid": 0} >2018-10-02 10:44:00,067 p=605 u=mistral | 
changed: [controller-0] => (item=/var/lib/kolla/config_files/ceilometer_agent_notification.json) => {"changed": true, "checksum": "60eec3e718b294ae05e52da14f2db42a06fb93a9", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/ceilometer_agent_notification.json", {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}], "md5sum": "2f01e419ebdad98b2d5e49b94c8c980e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491439.56-264673010052161/source", "state": "file", "uid": 0} >2018-10-02 10:44:00,070 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_compute.json) => {"changed": true, "checksum": "6afeee3c19010437bf1ccc38749ac6c0b96cc70a", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_compute.json", {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": 
"430d079a841e830bd7f78bb526583b96", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 927, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491439.56-92178610690950/source", "state": "file", "uid": 0} >2018-10-02 10:44:00,574 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api.json) => {"changed": true, "checksum": "74ef43c5be2146af6ac8aec7c636329654b98cb4", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "edb991e706ddfdf46e5953dd9dd50f20", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 409, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491440.08-247426625041459/source", "state": "file", "uid": 0} >2018-10-02 10:44:00,588 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_libvirt.json) => {"changed": true, "checksum": "65ab6d1486d27536bef71d729d69d5a4e1ed39cc", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_libvirt.json", {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "122a85849fd7331643c266d1c06aa44e", 
"mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 818, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491440.08-185622491092665/source", "state": "file", "uid": 0} >2018-10-02 10:44:01,070 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_api_cron.json) => {"changed": true, "checksum": "cf9eab2e83b0ed617d39b36638b9dbbaed31f675", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "4c96926f14f7c02894093b15f77f66ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491440.58-48725814299903/source", "state": "file", "uid": 0} >2018-10-02 10:44:01,089 p=605 u=mistral | changed: [compute-0] => (item=/var/lib/kolla/config_files/nova_virtlogd.json) => {"changed": true, "checksum": "75ebc27be03214be0291f0ed5776b9d9c05b1773", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_virtlogd.json", {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "1971e50723b046a7c66a1ecc7635dc67", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 279, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491440.6-128352494156094/source", "state": "file", "uid": 0} >2018-10-02 10:44:01,559 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_backup.json) => 
{"changed": true, "checksum": "fc28ba7bb64dda776da4fb6b65ab4cce58c55043", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_backup.json", {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "9dc8348aa5d9c1399e5ec9b9a8bf39a5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1001, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491441.08-107189762568709/source", "state": "file", "uid": 0} >2018-10-02 10:44:02,034 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_scheduler.json) => {"changed": true, "checksum": "8247ea37983cee31da341830b5a7351da4f55bb6", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_scheduler.json", {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "4d54f110f3905ea3ab1eeca28c8a20f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 493, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491441.57-111189064082437/source", "state": "file", "uid": 0} >2018-10-02 10:44:02,512 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/cinder_volume.json) => {"changed": true, "checksum": "9a45013e6489f8e1a4b26ce2bac479740b72a291", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/cinder_volume.json", {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}], "md5sum": "9087654f4a760bf3dc681aaa4ac80b46", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491442.05-115342114720272/source", "state": "file", "uid": 0} >2018-10-02 10:44:03,006 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/clustercheck.json) => {"changed": true, "checksum": "498341e7e5d08339f5a407a871691f38aeb88160", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/clustercheck.json", {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "575af5380cd86d03642aec48e0b09839", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 251, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491442.52-260131448859193/source", "state": "file", "uid": 0} >2018-10-02 10:44:03,499 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/glance_api.json) => {"changed": true, "checksum": "1b1c2ce62e71e24ba6e806ac1fa0a25f9bac02bc", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/glance_api.json", {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "d84278d9f994dc2e8aeae1544bf1ff9e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 836, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491443.02-61220723105400/source", "state": "file", "uid": 0} >2018-10-02 10:44:03,976 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/glance_api_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491443.51-62733533883183/source", "state": "file", "uid": 0} >2018-10-02 10:44:04,446 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_api.json) => {"changed": true, "checksum": "398476f1850153ccbdec3645eb518301076734d3", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "46345cb5377a2113e5df6f3a55609501", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 755, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491443.99-88959047913474/source", "state": "file", "uid": 0} >2018-10-02 10:44:04,917 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_db_sync.json) => {"changed": true, "checksum": "6d8f6ad47b0adea396ec88bc87650c3b37f95b29", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_db_sync.json", {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", 
"path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "7cfa51bbfe45f59f2687e86208b2cd32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 811, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491444.45-52921550323604/source", "state": "file", "uid": 0} >2018-10-02 10:44:05,384 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_metricd.json) => {"changed": true, "checksum": "2c277290410059b85904555394495ff85e713585", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_metricd.json", {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "a0d142d2edc479ff4164aa5a354d45c2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 751, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491444.93-66662404845855/source", "state": "file", "uid": 0} >2018-10-02 10:44:05,861 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/gnocchi_statsd.json) => {"changed": true, "checksum": "bc9cdf4be4f10268a8921bf7f955044bca40a6d7", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/gnocchi_statsd.json", {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}], "md5sum": "ed7d62ee2974152d4f7ad928ecffffa3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 750, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491445.39-21787750506517/source", "state": "file", "uid": 0} >2018-10-02 10:44:06,344 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/haproxy.json) => {"changed": true, "checksum": "9a4b9d1d7f16f7bf07f22ea58e51305a17651991", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/haproxy.json", {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}], "md5sum": "df253cc0124ec5e92113b43d8f45a1bd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1037, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491445.87-183737408077509/source", "state": "file", "uid": 0} >2018-10-02 10:44:06,800 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api.json) => {"changed": true, "checksum": "d8ba895b605f2f569f938611610bd87d4c0c1843", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": 
"root", "item": ["/var/lib/kolla/config_files/heat_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "73c8da5dcb124ae745f0dafdecb759fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491446.35-243821219325891/source", "state": "file", "uid": 0} >2018-10-02 10:44:07,276 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cfn.json) => {"changed": true, "checksum": "d8ba895b605f2f569f938611610bd87d4c0c1843", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_api_cfn.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "73c8da5dcb124ae745f0dafdecb759fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491446.81-46416025716496/source", "state": "file", "uid": 0} >2018-10-02 10:44:07,766 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_api_cron.json) => {"changed": true, "checksum": "3094b61a55d29dfe193b697638c9a1225a2eab4b", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", 
"recurse": true}]}], "md5sum": "b872ad178d48140e84acf295deb896b1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 393, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491447.29-269148631622517/source", "state": "file", "uid": 0} >2018-10-02 10:44:08,262 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/heat_engine.json) => {"changed": true, "checksum": "da38d4d29e5f3b6754fd147b5e4ce08867367b4f", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/heat_engine.json", {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}], "md5sum": "d9d073e1d28f19dae913d1198a17461b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491447.78-192982931894925/source", "state": "file", "uid": 0} >2018-10-02 10:44:08,748 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/horizon.json) => {"changed": true, "checksum": "10b4664bce96ab9dbf9a249322506726643d22b9", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/horizon.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": 
"apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}], "md5sum": "50f5bff449ad137aa0772554002a49fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 911, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491448.27-193904136032151/source", "state": "file", "uid": 0} >2018-10-02 10:44:09,232 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/iscsid.json) => {"changed": true, "checksum": "d310c205955d0f5d508329bf624cbe8345535c34", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/iscsid.json", {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}], "md5sum": "22ef322b4a91ebca32ec0dd9c41be102", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491448.76-99392023060097/source", "state": "file", "uid": 0} >2018-10-02 10:44:09,699 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/keystone.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/keystone.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491449.24-222374757017054/source", "state": "file", "uid": 0} >2018-10-02 10:44:10,169 p=605 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/keystone_cron.json) => {"changed": true, "checksum": "d445d71ded9217fe930e649813e1dcf19f36271a", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/keystone_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}], "md5sum": "aebc5c71b140992f2e480d6f98cf0957", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 405, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491449.71-261734268738023/source", "state": "file", "uid": 0} >2018-10-02 10:44:10,618 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/logrotate-crond.json) => {"changed": true, "checksum": "e05e847d3096659560f83aa3fcb0ef1d15168e8e", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/logrotate-crond.json", {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6a997b9e6deb0e043397bf22a50004d4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491450.18-260265451285539/source", "state": "file", "uid": 0} >2018-10-02 10:44:11,085 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/mysql.json) => {"changed": true, "checksum": "16d384bc3e0d8580a0d746eedecd5375f23ba9f6", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/mysql.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": 
"root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}], "md5sum": "3c91849f4fcf4c2188667e6ed5db2a57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1133, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491450.63-257335687249181/source", "state": "file", "uid": 0} >2018-10-02 10:44:11,569 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_api.json) => {"changed": true, "checksum": "72ccad463ca9cf6403c76cb32ab9a2a7b929d0ac", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_api.json", {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "ec071a9599838390074469cf52d6616b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 702, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491451.09-247349137345241/source", "state": "file", "uid": 0} >2018-10-02 
10:44:12,053 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_dhcp.json) => {"changed": true, "checksum": "058d5a1972085dcd7cdadcaa416c9cbb2382cda2", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_dhcp.json", {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}], "md5sum": "3d55d72eb7fff4e8f3754ee9770b22d2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491451.58-23699118472283/source", "state": "file", "uid": 0} >2018-10-02 10:44:12,526 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_l3_agent.json) => {"changed": true, "checksum": "f5ffbfdade14575cf8c53d18447e2b2b9c59cac7", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_l3_agent.json", {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "md5sum": "e159dec67bbce60437c2ee885efa6b27", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 844, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491452.06-117200133380468/source", "state": "file", "uid": 0} >2018-10-02 10:44:12,966 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_metadata_agent.json) => {"changed": true, "checksum": "cd52f696acdcff22cd6714ce45a850b21eab4d9e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_metadata_agent.json", {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}], "md5sum": "f5e7ef39070696edf4df9ac35bb4aa35", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 827, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491452.53-94362615644459/source", "state": "file", "uid": 0} >2018-10-02 10:44:13,389 
p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_ovs_agent.json) => {"changed": true, "checksum": "297543dc37af33605befea77ef4a371f0a6a3662", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_ovs_agent.json", {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}], "md5sum": "51a8878fe08bb182bee7ac73da2e17d3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 414, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491452.97-241251168063692/source", "state": "file", "uid": 0} >2018-10-02 10:44:13,841 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/neutron_server_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/neutron_server_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491453.4-3841696902162/source", "state": "file", "uid": 0} >2018-10-02 10:44:14,314 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_api.json) => {"changed": true, "checksum": "44ed45616466b118b8c77858c293e379b590863d", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_api.json", {"command": 
"/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "384513e893d6ff439145e291b5ddd786", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491453.85-69289101980795/source", "state": "file", "uid": 0} >2018-10-02 10:44:14,772 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_api_cron.json) => {"changed": true, "checksum": "9faed2be90b741cddf13fb61327173d1b58847c5", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_api_cron.json", {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "ce0cf11faae2d6c4ca22fb929827d0c8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 393, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491454.32-156514286827120/source", "state": "file", "uid": 0} >2018-10-02 10:44:15,221 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_conductor.json) => {"changed": true, "checksum": "9e93d361bdd695857cfec8d32309445f8508fa80", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_conductor.json", {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "86ab49ff06297d94cfb501948e69aba6", "mode": 
"0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491454.78-249712108966335/source", "state": "file", "uid": 0} >2018-10-02 10:44:15,673 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_consoleauth.json) => {"changed": true, "checksum": "87c1b1409f70be6c58ecff47b5ed82c4fe98a20e", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_consoleauth.json", {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "5fd5c52813e4e48f0556254cb98e6e2c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 401, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491455.23-10573978981085/source", "state": "file", "uid": 0} >2018-10-02 10:44:16,127 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_metadata.json) => {"changed": true, "checksum": "9486a72b72c8b74a8db176060403f69d46b47a43", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_metadata.json", {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "fcf00203f1d0e35dc5d6e3032c41f168", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 402, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491455.68-189245006048374/source", "state": "file", "uid": 0} >2018-10-02 10:44:16,579 p=605 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/nova_placement.json) => {"changed": true, "checksum": "44ed45616466b118b8c77858c293e379b590863d", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_placement.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "384513e893d6ff439145e291b5ddd786", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 403, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491456.14-213165561845181/source", "state": "file", "uid": 0} >2018-10-02 10:44:17,038 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_scheduler.json) => {"changed": true, "checksum": "54c5708c92f2f717a8804ec7cf58c66648398685", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_scheduler.json", {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}], "md5sum": "46d2917b61186faac8a91082715ada76", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491456.59-179149822926247/source", "state": "file", "uid": 0} >2018-10-02 10:44:17,499 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/nova_vnc_proxy.json) => {"changed": true, "checksum": "1602845294bee8781ed7124c5e61794d5174a570", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/nova_vnc_proxy.json", 
{"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}], "md5sum": "43607a8514595d41a4ef66df9ef5c82b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 751, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491457.05-110893755531562/source", "state": "file", "uid": 0} >2018-10-02 10:44:17,959 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/panko_api.json) => {"changed": true, "checksum": "d6e42dfd0293a2e8eb981dbb63aa49bf424c8e53", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/panko_api.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}], "md5sum": "fd86d49907f869879bfac107c48a4515", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 406, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491457.51-268529740975102/source", "state": "file", "uid": 0} >2018-10-02 10:44:18,420 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/rabbitmq.json) => {"changed": true, "checksum": "a1699d6d38b070ef10a31b28e09827b21c832053", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/rabbitmq.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": 
"/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}], "md5sum": "47843c1764359e4e90142aaaaf4a712f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1295, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491457.97-27328398510690/source", "state": "file", "uid": 0} >2018-10-02 10:44:18,880 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/redis.json) => {"changed": true, "checksum": "e28b5f6e4c0c330004d1adcadc7854bb6fb6a276", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/redis.json", {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}], "md5sum": "4f3e9e8b7a99b46afad0a1c46fba9b37", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 863, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491458.43-122933470138712/source", "state": "file", "uid": 0} >2018-10-02 10:44:19,348 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/redis_tls_proxy.json) => {"changed": true, "checksum": "a5aefe3f08ebc2eb779b4b5d84f1bdcf52212da7", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/redis_tls_proxy.json", {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}], "md5sum": "5fbd6db7922fa356062d34d68189986c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 834, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491458.89-115623138764634/source", "state": "file", "uid": 0} >2018-10-02 10:44:19,815 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/sahara-api.json) => {"changed": true, "checksum": "ac32d17e2d9a2ddbe9fe3f16850643ddea7b8241", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/sahara-api.json", {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "md5sum": "55e005a0ea1189fe2fdaec2aa067c9ad", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 567, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491459.36-25433582449703/source", "state": "file", "uid": 0} >2018-10-02 10:44:20,277 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/sahara-engine.json) => {"changed": true, "checksum": "d1df68a467581e77f333f3b298e7468d481cd4f9", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/sahara-engine.json", {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}], "md5sum": "28f461b63d8f387e621270b48c173c14", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 570, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491459.82-114724679017289/source", "state": "file", "uid": 0} >2018-10-02 10:44:20,746 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_auditor.json) => {"changed": true, "checksum": "f443ddd7e1a092183f1b1bbfeb907cfa02350b8e", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_auditor.json", {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "fed3aeb2bc74d1bddee73605c9721620", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 286, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491460.29-252099164055023/source", "state": "file", "uid": 0} 
>2018-10-02 10:44:21,238 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_reaper.json) => {"changed": true, "checksum": "f738705362f769b5c58dbc9c992f47e85f1ab843", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_reaper.json", {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6b53a6cd98296db748d8be17516c9ee9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491460.76-224841461591141/source", "state": "file", "uid": 0} >2018-10-02 10:44:21,720 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_replicator.json) => {"changed": true, "checksum": "ca1380e0b1137ad3d00ea1072626895f4fe49d47", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_account_replicator.json", {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "ea95efcb272cc6b7461042449930b907", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 289, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491461.25-209870014530457/source", "state": "file", "uid": 0} >2018-10-02 10:44:22,162 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_account_server.json) => {"changed": true, "checksum": "bab06bf4ffa6e74dc1350557f8a6ee04932bd706", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": 
["/var/lib/kolla/config_files/swift_account_server.json", {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "7759d7f076a692679c06e6ed62af4515", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491461.73-17878437964667/source", "state": "file", "uid": 0} >2018-10-02 10:44:22,626 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_auditor.json) => {"changed": true, "checksum": "0eb4f95e78f6179fffb63db4d145e159589d34bb", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_auditor.json", {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "5ab871921503ca9c5ae5199392c032da", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 290, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491462.17-127567601007398/source", "state": "file", "uid": 0} >2018-10-02 10:44:23,073 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_replicator.json) => {"changed": true, "checksum": "0f5cdcae0a9852bb0409d285c255b32f4e5b5aad", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_replicator.json", {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "8ea9581644b6b529790c49c4affb5248", 
"mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 293, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491462.63-273470426576278/source", "state": "file", "uid": 0} >2018-10-02 10:44:23,512 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_server.json) => {"changed": true, "checksum": "91edd76df109b9e85b08c44212af66ce68b703cc", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_server.json", {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "8cc885791431eef6c91a6c1795ebae5d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 289, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491463.08-218055399151708/source", "state": "file", "uid": 0} >2018-10-02 10:44:23,985 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_container_updater.json) => {"changed": true, "checksum": "5806fb41d64e1ec9927f04cee62782d0ad2220ad", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_container_updater.json", {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a11530a8453ca2aced8b757baded7afa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 290, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491463.52-15186065713137/source", "state": "file", "uid": 0} >2018-10-02 10:44:24,450 p=605 u=mistral | changed: [controller-0] => 
(item=/var/lib/kolla/config_files/swift_object_auditor.json) => {"changed": true, "checksum": "917b1916fd92fb4118912953f92968148232f0b4", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_auditor.json", {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a46509d71c3a164b7337486fc72c21eb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 284, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491463.99-115534573648656/source", "state": "file", "uid": 0} >2018-10-02 10:44:24,916 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_expirer.json) => {"changed": true, "checksum": "bcd142a3190958913657993b2d0370b8b50d8de6", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_expirer.json", {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "b3e309e5012e5a0de7897d4910b743e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491464.46-196373037911420/source", "state": "file", "uid": 0} >2018-10-02 10:44:25,372 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_replicator.json) => {"changed": true, "checksum": "71861a048120b6189ee51944215bf6f35060f641", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_replicator.json", {"command": 
"/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "a1a58d8bb7898e3d50d84cb6d0b6c295", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 287, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491464.92-18266587705561/source", "state": "file", "uid": 0} >2018-10-02 10:44:25,836 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_server.json) => {"changed": true, "checksum": "0fc6810e1c10d6510da93c21a7ef2f3f5da07470", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_server.json", {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}], "md5sum": "998a973b45a0b4d58ffc3846445ae2f4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 438, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491465.38-278747093301160/source", "state": "file", "uid": 0} >2018-10-02 10:44:26,286 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_object_updater.json) => {"changed": true, "checksum": "05003e9fbb4c2e4b1582d568a56a819c4c861747", "dest": "/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_object_updater.json", {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "6ba83369ac04258591882d0ca18861b7", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 284, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491465.84-118210198644107/source", "state": "file", "uid": 0} >2018-10-02 10:44:26,759 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy.json) => {"changed": true, "checksum": "4cabf21d4f9d5c422dd56beda1075370c5c0578d", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_proxy.json", {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "94550794bfe1ed7707c5aa631b14664f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 281, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491466.29-177569368638159/source", "state": "file", "uid": 0} >2018-10-02 10:44:27,215 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_proxy_tls_proxy.json) => {"changed": true, "checksum": "20bba94ac1ce7afb7fd0793567a9fe48300d1a15", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "bda59eb8d2adeb0f47b803f83819cb93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491466.77-151154842276097/source", "state": "file", "uid": 0} >2018-10-02 10:44:27,681 p=605 u=mistral | changed: [controller-0] => (item=/var/lib/kolla/config_files/swift_rsync.json) => {"changed": true, "checksum": "6ac960e4f5a1bb13c557a47292a7d63517d1b75d", "dest": 
"/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": ["/var/lib/kolla/config_files/swift_rsync.json", {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}], "md5sum": "f80d86d94e23c4a21e131a520023d48e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 286, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491467.22-164273050214763/source", "state": "file", "uid": 0} >2018-10-02 10:44:27,746 p=605 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-10-02 10:44:27,746 p=605 u=mistral | Tuesday 02 October 2018 10:44:27 -0400 (0:00:31.034) 0:04:58.020 ******* >2018-10-02 10:44:27,763 p=605 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 10:44:27,790 p=605 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 10:44:27,825 p=605 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-10-02 10:44:27,877 p=605 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-10-02 10:44:27,877 p=605 u=mistral | Tuesday 02 October 2018 10:44:27 -0400 (0:00:00.130) 0:04:58.150 ******* >2018-10-02 10:44:28,504 p=605 u=mistral | changed: [controller-0] => (item=step_3) => {"changed": true, "checksum": "f95a667e13f830f3654131f0f75b234e7583eada", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": ["step_3", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", "config_volume": "keystone_init_tasks", "puppet_tags": 
"keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]], "md5sum": "3cb02ed98d510494fae3b905d481887e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 444, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491468.01-95190884268861/source", "state": "file", "uid": 0} >2018-10-02 10:44:29,000 p=605 u=mistral | changed: [controller-0] => (item=step_4) => {"changed": true, "checksum": "54032a2f094e88383168daf9a4c4272527eb58c2", "dest": "/var/lib/docker-puppet/docker-puppet-tasks4.json", "gid": 0, "group": "root", "item": ["step_4", [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]], "md5sum": "39336ca7617002b5943f604caee3cea5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 399, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491468.52-263156160889871/source", "state": "file", "uid": 0} >2018-10-02 10:44:29,030 p=605 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-10-02 10:44:29,030 p=605 u=mistral | Tuesday 02 October 2018 10:44:29 -0400 (0:00:01.153) 0:04:59.303 ******* >2018-10-02 10:44:29,063 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,095 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,112 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-10-02 10:44:29,140 p=605 u=mistral | TASK [Check for /etc/puppet/check-mode directory for check mode] *************** >2018-10-02 10:44:29,140 p=605 u=mistral | Tuesday 02 October 2018 10:44:29 -0400 (0:00:00.110) 0:04:59.414 ******* >2018-10-02 10:44:29,172 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,204 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,216 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,244 p=605 u=mistral | TASK [Create /etc/puppet/check-mode/hieradata directory for check mode] ******** >2018-10-02 10:44:29,244 p=605 u=mistral | Tuesday 02 October 2018 10:44:29 -0400 (0:00:00.104) 0:04:59.518 ******* >2018-10-02 10:44:29,278 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,311 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,341 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:29,372 p=605 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-10-02 10:44:29,372 p=605 u=mistral | Tuesday 02 October 2018 10:44:29 -0400 (0:00:00.127) 0:04:59.645 ******* >2018-10-02 10:44:29,994 p=605 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491469.5-262776450406499/source", 
"state": "file", "uid": 0} >2018-10-02 10:44:30,074 p=605 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491469.53-156568363334486/source", "state": "file", "uid": 0} >2018-10-02 10:44:30,094 p=605 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1538491469.57-77340244375073/source", "state": "file", "uid": 0} >2018-10-02 10:44:30,122 p=605 u=mistral | TASK [Create puppet check-mode files if they don't exist for check mode] ******* >2018-10-02 10:44:30,123 p=605 u=mistral | Tuesday 02 October 2018 10:44:30 -0400 (0:00:00.750) 0:05:00.396 ******* >2018-10-02 10:44:30,157 p=605 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:30,189 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:30,201 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:44:30,232 p=605 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-10-02 10:44:30,232 p=605 u=mistral | Tuesday 02 October 2018 10:44:30 -0400 (0:00:00.109) 0:05:00.506 ******* >2018-10-02 10:44:45,971 p=605 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: 
true' was specified for this result", "changed": true} >2018-10-02 10:44:48,428 p=605 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 10:45:57,827 p=605 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-10-02 10:45:57,857 p=605 u=mistral | TASK [Debug output for task: Run puppet host configuration for step 1] ********* >2018-10-02 10:45:57,858 p=605 u=mistral | Tuesday 02 October 2018 10:45:57 -0400 (0:01:27.625) 0:06:28.131 ******* >2018-10-02 10:45:57,938 p=605 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.24 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed 
'{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}48e4efbb5b474620b9d2e67ef6cc1df9' to '{md5}85274b5d58af3572868d4ef10722b50f'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 
'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 76.15 seconds", > "Changes:", > " Total: 169", > "Events:", > " Success: 169", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 215", > " Restarted: 5", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.04", > " Sysctl: 0.15", > " Sysctl runtime: 0.20", > " File: 0.27", > " Package: 0.44", > " Pcmk property: 1.08", > " Firewall: 14.64", > " Last run: 1538491557", > " Service: 2.60", > " Config retrieval: 3.85", > " Exec: 53.47", > " Concat fragment: 0.00", > " Total: 76.77", > "Version:", > " Config: 1538491477", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 10:45:57,964 p=605 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.09 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 
nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 8.47 seconds", > "Changes:", > " Total: 99", > "Events:", > " Success: 99", > "Resources:", > " Total: 140", > " Restarted: 3", > " Out of sync: 99", > " Changed: 99", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " File: 0.12", > " Sysctl: 0.15", > " Sysctl runtime: 0.20", > " Package: 0.24", > " Service: 1.17", > " Last run: 1538491488", > " Firewall: 2.20", > " Config retrieval: 2.39", > " Exec: 2.98", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Total: 9.48", > 
"Version:", > " Config: 1538491477", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 10:45:57,997 p=605 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.70 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 
'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}f47467dc7908161e5e0e39e67daa454e'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to 
'1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.50 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 134", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.05", > " File: 0.20", > " Sysctl runtime: 0.20", > " Package: 0.23", > " Service: 1.31", > " Firewall: 1.39", > " Config retrieval: 1.94", > " Exec: 1.98", > " Last run: 1538491485", > " Total: 7.34", > " Concat fragment: 0.00", > "Version:", > " Config: 1538491477", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-10-02 10:45:58,032 p=605 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-10-02 10:45:58,033 p=605 u=mistral | Tuesday 02 October 2018 10:45:58 -0400 (0:00:00.175) 0:06:28.306 ******* >2018-10-02 10:46:21,409 p=605 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:46:57,212 p=605 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:48:53,898 p=605 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:48:53,929 p=605 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (generate config) during step 1] *** >2018-10-02 10:48:53,929 p=605 u=mistral | Tuesday 02 October 2018 10:48:53 -0400 (0:02:55.896) 0:09:24.202 ******* >2018-10-02 10:48:54,139 p=605 
u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 14:45:58,360 INFO: 16767 -- Running docker-puppet", > "2018-10-02 14:45:58,360 DEBUG: 16767 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 14:45:58,360 DEBUG: 16767 -- config_volume crond", > "2018-10-02 14:45:58,361 DEBUG: 16767 -- puppet_tags ", > "2018-10-02 14:45:58,361 DEBUG: 16767 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:45:58,361 DEBUG: 16767 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,361 DEBUG: 16767 -- volumes []", > "2018-10-02 14:45:58,361 DEBUG: 16767 -- Adding new service", > "2018-10-02 14:45:58,361 INFO: 16767 -- Service compilation completed.", > "2018-10-02 14:45:58,362 DEBUG: 16767 -- CHECK_MODE: 0", > "2018-10-02 14:45:58,362 DEBUG: 16767 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,362 INFO: 16767 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-10-02 14:45:58,375 INFO: 16768 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- config_volume crond", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- volumes []", > "2018-10-02 14:45:58,376 DEBUG: 16768 -- check_mode 0", > "2018-10-02 14:45:58,377 INFO: 16768 -- Removing container: docker-puppet-crond", > "2018-10-02 14:45:58,463 INFO: 16768 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:46:12,563 DEBUG: 16768 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "4d80de3c75a6: Verifying Checksum", > "4d80de3c75a6: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "0f4899fadd7f: Verifying Checksum", > "0f4899fadd7f: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "", > "2018-10-02 14:46:12,567 DEBUG: 16768 -- NET_HOST enabled", > "2018-10-02 14:46:12,568 DEBUG: 16768 -- Running docker command: /usr/bin/docker run --user root --name 
docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpS6CK5R:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:46:21,232 DEBUG: 16768 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.54 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.67", > " Total: 0.68", > " Last run: 1538491580", > "Version:", > " Config: 
1538491579", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 14:46:12.821634177 +0000", > "2018-10-02 14:46:21,233 DEBUG: 16768 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root 
/opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:12.821634177 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar xO", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ sed '/^#.*HEADER.*/d'", > "tar: Removing leading `/' from member names", > "+ md5sum", > "+ awk '{print $1}'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 14:46:21,233 INFO: 16768 -- Removing container: docker-puppet-crond", > "2018-10-02 14:46:21,268 DEBUG: 16768 -- docker-puppet-crond", > "2018-10-02 14:46:21,269 INFO: 16768 -- Finished processing puppet configs for crond", > "2018-10-02 14:46:21,269 DEBUG: 16767 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-10-02 14:46:21,269 DEBUG: 16767 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-10-02 14:46:21,272 DEBUG: 16767 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 14:46:21,272 DEBUG: 16767 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume 
/var/lib/config-data/puppet-generated/crond", > "2018-10-02 14:46:21,272 DEBUG: 16767 -- Updating config hash for logrotate_crond, config_volume=crond hash=6f2a5e23a896d70ebbc2c66d87cd9266" > ] >} >2018-10-02 10:48:54,188 p=605 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 14:45:58,288 INFO: 18770 -- Running docker-puppet", > "2018-10-02 14:45:58,288 DEBUG: 18770 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- config_volume ceilometer", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- puppet_tags ceilometer_config", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- volumes []", > "2018-10-02 14:45:58,289 DEBUG: 18770 -- Adding new service", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- config_volume neutron", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- puppet_tags neutron_plugin_ml2", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- volumes []", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- Adding new service", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 14:45:58,290 DEBUG: 18770 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- Existing service, appending puppet tags and manifest", > 
"2018-10-02 14:45:58,291 DEBUG: 18770 -- config_volume iscsid", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- puppet_tags iscsid_config", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- Adding new service", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- config_volume nova_libvirt", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:45:58,291 DEBUG: 18770 -- volumes []", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- Adding new service", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- config_volume nova_libvirt", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- volumes []", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- puppet_tags ", > "2018-10-02 14:45:58,292 DEBUG: 18770 -- manifest include ::tripleo::profile::base::sshd", > "include 
tripleo::profile::base::nova::migration::target", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- volumes []", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- config_volume crond", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- puppet_tags ", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,293 DEBUG: 18770 -- Adding new service", > "2018-10-02 14:45:58,293 INFO: 18770 -- Service compilation completed.", > "2018-10-02 14:45:58,294 DEBUG: 18770 -- CHECK_MODE: 0", > "2018-10-02 14:45:58,294 DEBUG: 18770 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,294 DEBUG: 18770 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,294 DEBUG: 18770 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', 
[], 0]", > "2018-10-02 14:45:58,295 DEBUG: 18770 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 0]", > "2018-10-02 14:45:58,295 DEBUG: 18770 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1', [u'/etc/iscsi:/etc/iscsi'], 0]", > "2018-10-02 14:45:58,295 INFO: 18770 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-10-02 14:45:58,306 INFO: 18771 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:45:58,306 DEBUG: 18771 -- config_volume ceilometer", > "2018-10-02 14:45:58,307 DEBUG: 18771 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-10-02 14:45:58,307 DEBUG: 18771 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-10-02 14:45:58,306 INFO: 18772 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:45:58,307 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:45:58,307 DEBUG: 18771 -- volumes []", > "2018-10-02 14:45:58,307 DEBUG: 18771 -- check_mode 0", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- config_volume nova_libvirt", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- manifest # TODO(emilien): figure how to deal with 
libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- volumes []", > "2018-10-02 14:45:58,307 DEBUG: 18772 -- check_mode 0", > "2018-10-02 14:45:58,308 INFO: 18771 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 14:45:58,308 INFO: 18772 -- Removing container: docker-puppet-nova_libvirt", > "2018-10-02 14:45:58,309 INFO: 18773 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- config_volume crond", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- volumes []", > "2018-10-02 14:45:58,309 DEBUG: 18773 -- check_mode 0", > "2018-10-02 14:45:58,310 INFO: 18773 -- Removing container: docker-puppet-crond", > "2018-10-02 14:45:58,408 INFO: 18772 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:45:58,409 INFO: 18773 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,413 INFO: 18771 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:46:13,321 DEBUG: 18773 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "0f4899fadd7f: Verifying Checksum", > "0f4899fadd7f: Download complete", > "4d80de3c75a6: Verifying Checksum", > "4d80de3c75a6: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:46:13,325 DEBUG: 18773 -- NET_HOST enabled", > "2018-10-02 14:46:13,326 DEBUG: 18773 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp9Le93h:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:46:19,268 DEBUG: 18771 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "ff59208988ad: Pulling fs layer", > "5fcda0d83a5e: Pulling fs layer", > "2142eca15b92: Pulling fs layer", > "ff59208988ad: Waiting", > "5fcda0d83a5e: Waiting", > "2142eca15b92: Waiting", > "ff59208988ad: Verifying Checksum", > "ff59208988ad: Download complete", > "5fcda0d83a5e: Verifying Checksum", > "5fcda0d83a5e: Download complete", > "2142eca15b92: Verifying Checksum", > "2142eca15b92: Download complete", > "ff59208988ad: Pull complete", > "5fcda0d83a5e: Pull complete", > "2142eca15b92: Pull complete", > "Digest: sha256:ba6a24fd5b438c2530cbd903d1b4616e6075f146618be39391273ae43949bbad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:46:19,271 DEBUG: 18771 -- NET_HOST enabled", > "2018-10-02 14:46:19,271 DEBUG: 18771 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8GyiKi:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:46:22,125 DEBUG: 18773 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.48 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.01", > " Cron: 0.01", > " Config retrieval: 0.58", > " Total: 0.59", > " Last run: 1538491581", > "Version:", > " Config: 1538491580", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 14:46:13.675835580 +0000", > "2018-10-02 14:46:22,126 DEBUG: 18773 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ 
FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:13.675835580 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' 
'--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ sed '/^#.*HEADER.*/d'", > "+ tar xO", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ md5sum", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 14:46:22,126 INFO: 18773 -- Removing container: docker-puppet-crond", > "2018-10-02 14:46:22,194 DEBUG: 18773 -- docker-puppet-crond", > "2018-10-02 14:46:22,194 INFO: 18773 -- Finished processing puppet configs for crond", > "2018-10-02 14:46:22,195 INFO: 18773 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- config_volume neutron", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 14:46:22,195 DEBUG: 18773 -- check_mode 0", > "2018-10-02 14:46:22,197 INFO: 18773 -- Removing container: docker-puppet-neutron", > "2018-10-02 14:46:22,276 INFO: 18773 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:46:29,777 DEBUG: 18771 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
compute-0.localdomain in environment production in 1.18 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.70 seconds", > " Total: 24", > " Success: 24", > " Total: 139", > " Skipped: 22", > " Out of sync: 24", > " Changed: 24", > " Ceilometer config: 0.59", > " Config retrieval: 1.39", > " Total: 1.98", > " Last run: 1538491588", > " Resources: 0.00", > " Config: 1538491586", > "Gathering files modified after 2018-10-02 14:46:19.533827898 +0000", > "2018-10-02 14:46:29,778 DEBUG: 18771 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config 
/etc/config.pp", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:19.533827898 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-10-02 14:46:29,778 INFO: 18771 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 14:46:29,819 DEBUG: 18771 -- docker-puppet-ceilometer", > "2018-10-02 14:46:29,819 INFO: 18771 -- Finished 
processing puppet configs for ceilometer", > "2018-10-02 14:46:29,820 INFO: 18771 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- config_volume iscsid", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 14:46:29,820 DEBUG: 18771 -- check_mode 0", > "2018-10-02 14:46:29,822 INFO: 18771 -- Removing container: docker-puppet-iscsid", > "2018-10-02 14:46:29,918 INFO: 18771 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:46:30,218 DEBUG: 18773 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "f3c66d22e08b: Pulling fs layer", > "6cca3e1c80e1: Pulling fs layer", > "d405f46408bf: Pulling fs layer", > "d405f46408bf: Verifying Checksum", > "d405f46408bf: Download complete", > "6cca3e1c80e1: Verifying Checksum", > "6cca3e1c80e1: Download complete", > "f3c66d22e08b: Verifying Checksum", > "f3c66d22e08b: Download complete", > "f3c66d22e08b: Pull complete", > "6cca3e1c80e1: Pull complete", > "d405f46408bf: Pull complete", > "Digest: sha256:0c7ace86b7c08a5ec94dbf283b5a7a95f0678caf8c830185bcfc7a5dbaec5704", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:46:30,222 DEBUG: 18773 -- NET_HOST enabled", > "2018-10-02 14:46:30,222 DEBUG: 18773 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpgHIDMr:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume 
/lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:46:30,886 DEBUG: 18771 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "2afcd4790b43: Pulling fs layer", > "2afcd4790b43: Download complete", > "2afcd4790b43: Pull complete", > "Digest: sha256:b516e920a95255994d6493d4a922af867754e570e2afe8afeaa5c2f3e25a6d94", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:46:30,890 DEBUG: 18771 -- NET_HOST enabled", > "2018-10-02 14:46:30,891 DEBUG: 18771 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpZZMbzz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:46:35,916 
DEBUG: 18772 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "9e28a9d49d0f: Pulling fs layer", > "eff4ef11e8d6: Pulling fs layer", > "9e28a9d49d0f: Waiting", > "eff4ef11e8d6: Waiting", > "9e28a9d49d0f: Verifying Checksum", > "9e28a9d49d0f: Download complete", > "eff4ef11e8d6: Verifying Checksum", > "eff4ef11e8d6: Download complete", > "9e28a9d49d0f: Pull complete", > "eff4ef11e8d6: Pull complete", > "Digest: sha256:9cbbdf47aea4339ed69ccc5d376981d41ee8a96efdf03e25708c9cf540b0c4ac", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:46:35,919 DEBUG: 18772 -- NET_HOST enabled", > "2018-10-02 14:46:35,919 DEBUG: 18772 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVdOfBz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-26.1", > "2018-10-02 14:46:38,583 DEBUG: 18771 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.53 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 10", > " Skipped: 8", > " File: 0.00", > " Exec: 0.02", > " Config retrieval: 0.60", > " Total: 0.63", > " Last run: 1538491597", > " Config: 1538491597", > "Gathering files modified after 2018-10-02 14:46:31.164813620 +0000", > "2018-10-02 14:46:38,583 DEBUG: 18771 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:31.164813620 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' 
'--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-10-02 14:46:38,583 INFO: 18771 -- Removing container: docker-puppet-iscsid", > "2018-10-02 14:46:38,621 DEBUG: 18771 -- docker-puppet-iscsid", > "2018-10-02 14:46:38,622 INFO: 18771 -- Finished processing puppet configs for iscsid", > "2018-10-02 14:46:41,834 DEBUG: 18773 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.53 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.84 seconds", > " Total: 45", > " Success: 45", > " Total: 175", > " Skipped: 27", > " Out of sync: 45", > " Changed: 45", > " Neutron agent ovs: 0.03", > " Neutron plugin ml2: 0.08", > " Neutron config: 0.58", > " Last run: 1538491600", > " Config retrieval: 2.78", > " Total: 3.47", > " Config: 1538491596", > "Gathering files modified after 2018-10-02 14:46:30.587814307 +0000", > "2018-10-02 14:46:41,835 DEBUG: 18773 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 492]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 53]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:30.587814307 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-10-02 14:46:41,835 INFO: 18773 -- Removing container: docker-puppet-neutron", > "2018-10-02 14:46:41,884 DEBUG: 18773 -- docker-puppet-neutron", > "2018-10-02 14:46:41,884 
INFO: 18773 -- Finished processing puppet configs for neutron", > "2018-10-02 14:46:57,010 DEBUG: 18772 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.98 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}b308b1b1aab82c160024dac0f6ad10ca'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}6fdbf752a1ce3b21f1303d4e498607a1'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}bd4018244d6d12704b4681795c9abf60'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully", > "Notice: 
/Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}4ef3001e7d55489751e718ff542ca32b'", > "Notice: Applied catalog in 9.34 seconds", > " Total: 108", > " Success: 108", > " Changed: 108", > " Out of sync: 108", > " Total: 324", > 
" Skipped: 48", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Libvirtd config: 0.02", > " File: 0.04", > " Package: 0.09", > " Augeas: 1.13", > " Total: 12.19", > " Last run: 1538491615", > " Config retrieval: 3.39", > " Nova config: 7.51", > " Config: 1538491602", > "Gathering files modified after 2018-10-02 14:46:36.128807702 +0000", > "2018-10-02 14:46:57,010 DEBUG: 18772 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch /var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. 
at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:36.128807702 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_libvirt", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-10-02 14:46:57,010 INFO: 18772 -- Removing container: docker-puppet-nova_libvirt", > "2018-10-02 14:46:57,056 DEBUG: 18772 -- docker-puppet-nova_libvirt", > "2018-10-02 14:46:57,056 INFO: 18772 -- Finished processing puppet configs for nova_libvirt", > "2018-10-02 14:46:57,056 DEBUG: 18770 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-10-02 14:46:57,057 DEBUG: 18770 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-10-02 14:46:57,059 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:46:57,059 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:46:57,059 DEBUG: 18770 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid hash=0c19dc2dfbb0a97abba8f3423f64c0e1", > "2018-10-02 14:46:57,060 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 14:46:57,060 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 14:46:57,060 DEBUG: 18770 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=d18d92b1d68af0f95b0b06adb0a6b38d", > "2018-10-02 14:46:57,060 DEBUG: 18770 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=d18d92b1d68af0f95b0b06adb0a6b38d", > "2018-10-02 
14:46:57,061 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 14:46:57,061 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-10-02 14:46:57,061 DEBUG: 18770 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=430e7eee831141fd19551e1acc0fbaf3", > "2018-10-02 14:46:57,061 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-10-02 14:46:57,061 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:46:57,061 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=0c19dc2dfbb0a97abba8f3423f64c0e1", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=d18d92b1d68af0f95b0b06adb0a6b38d", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Updating config hash for nova_compute, config_volume=iscsid hash=d18d92b1d68af0f95b0b06adb0a6b38d", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 14:46:57,062 DEBUG: 18770 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=6f2a5e23a896d70ebbc2c66d87cd9266" > ] >} >2018-10-02 10:48:55,242 p=605 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-10-02 14:45:58,281 INFO: 28748 -- Running docker-puppet", > "2018-10-02 14:45:58,281 DEBUG: 28748 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- config_volume aodh", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- puppet_tags aodh_api_paste_ini,aodh_config", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- puppet_tags aodh_config", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,282 DEBUG: 28748 -- manifest include tripleo::profile::base::aodh::listener", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- config_volume aodh", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- puppet_tags aodh_config", > 
"2018-10-02 14:45:58,283 DEBUG: 28748 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- config_volume ceilometer", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- puppet_tags ceilometer_config", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-10-02 14:45:58,283 DEBUG: 28748 -- config_volume cinder", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- puppet_tags cinder_config,cinder_type,file,concat,file_line", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- config_volume cinder", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- puppet_tags cinder_config,file,concat,file_line", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-10-02 14:45:58,284 DEBUG: 28748 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_volume clustercheck", > "2018-10-02 
14:45:58,285 DEBUG: 28748 -- puppet_tags file", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_volume glance_api", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- manifest include ::tripleo::profile::base::glance::api", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_volume gnocchi", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- puppet_tags gnocchi_config", > "2018-10-02 14:45:58,285 DEBUG: 28748 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_volume gnocchi", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- puppet_tags gnocchi_config", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_volume haproxy", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- 
puppet_tags haproxy_config", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_volume heat_api", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- puppet_tags heat_config,file,concat,file_line", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- manifest include ::tripleo::profile::base::heat::api", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:45:58,286 DEBUG: 28748 -- config_volume heat_api_cfn", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_volume heat", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- puppet_tags 
heat_config,file,concat,file_line", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_volume horizon", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- puppet_tags horizon_config", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- manifest include ::tripleo::profile::base::horizon", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_volume iscsid", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- puppet_tags iscsid_config", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 14:45:58,287 DEBUG: 28748 -- config_volume keystone", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- puppet_tags keystone_config,keystone_domain_config", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_volume memcached", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- puppet_tags file", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- manifest include ::tripleo::profile::base::memcached", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_image 
192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_volume mysql", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_volume neutron", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- puppet_tags neutron_config,neutron_api_config", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- manifest include tripleo::profile::base::neutron::server", > "2018-10-02 14:45:58,288 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- config_volume neutron", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- puppet_tags neutron_plugin_ml2", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- manifest include tripleo::profile::base::neutron::l3", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- puppet_tags 
neutron_config,neutron_metadata_agent_config", > "2018-10-02 14:45:58,289 DEBUG: 28748 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- config_volume neutron", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- config_volume nova", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- puppet_tags nova_config", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,290 DEBUG: 28748 -- manifest include tripleo::profile::base::nova::conductor", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- config_volume nova_placement", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- puppet_tags nova_config", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- manifest include 
tripleo::profile::base::nova::placement", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- config_volume nova", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-10-02 14:45:58,291 DEBUG: 28748 -- config_volume crond", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- puppet_tags ", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_volume panko", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- manifest include tripleo::profile::base::panko::api", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_volume rabbitmq", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- puppet_tags file", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_volume redis", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- puppet_tags exec", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- manifest include 
::tripleo::profile::pacemaker::database::redis_bundle", > "2018-10-02 14:45:58,292 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- config_volume sahara", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- config_volume swift", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- config_volume swift_ringbuilder", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-10-02 14:45:58,293 DEBUG: 28748 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- config_image 
192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- volumes []", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- Adding new service", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- config_volume swift", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-10-02 14:45:58,294 DEBUG: 28748 -- Existing service, appending puppet tags and manifest", > "2018-10-02 14:45:58,294 INFO: 28748 -- Service compilation completed.", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- CHECK_MODE: 0", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', 
u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', 
u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1', [u'/etc/iscsi:/etc/iscsi'], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1', [], 0]", 
> "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,295 DEBUG: 28748 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude 
::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro'], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include 
::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 DEBUG: 28748 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1', [], 0]", > "2018-10-02 14:45:58,296 INFO: 28748 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-10-02 14:45:58,308 INFO: 28750 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:45:58,308 INFO: 28749 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- config_volume swift_ringbuilder", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- config_volume nova_placement", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- manifest include tripleo::profile::base::nova::placement", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:45:58,308 INFO: 28751 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- volumes []", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- volumes []", > "2018-10-02 14:45:58,308 DEBUG: 28751 -- config_volume gnocchi", > "2018-10-02 14:45:58,308 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:45:58,308 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:45:58,308 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > 
"2018-10-02 14:45:58,309 DEBUG: 28751 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-10-02 14:45:58,309 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:45:58,309 DEBUG: 28751 -- volumes []", > "2018-10-02 14:45:58,309 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:45:58,309 INFO: 28750 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-10-02 14:45:58,310 INFO: 28749 -- Removing container: docker-puppet-nova_placement", > "2018-10-02 14:45:58,310 INFO: 28751 -- Removing container: docker-puppet-gnocchi", > "2018-10-02 14:45:58,396 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:45:58,400 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:45:58,403 INFO: 28749 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:46:18,028 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "0f4899fadd7f: Pulling fs layer", > "ff59208988ad: Pulling fs layer", > "119515329f22: Pulling fs layer", > "9f313d6fc73a: Pulling fs layer", > "ff59208988ad: Waiting", > "119515329f22: Waiting", > "9f313d6fc73a: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "ff59208988ad: Download complete", > "0f4899fadd7f: Verifying Checksum", > "0f4899fadd7f: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "119515329f22: Verifying Checksum", > "119515329f22: Download complete", > "9f313d6fc73a: Verifying Checksum", > "9f313d6fc73a: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "0f4899fadd7f: Pull complete", > "ff59208988ad: Pull complete", > "119515329f22: Pull complete", > "9f313d6fc73a: Pull complete", > "Digest: sha256:89819121606959e49721d100f1917a0698f37b8740a2f740eb6f20af29b481a8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:46:18,034 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:46:18,034 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp_4Kd_H:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:46:21,378 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "d0a704666261: Pulling fs layer", > "4df40fae1310: Pulling fs layer", > "d0a704666261: Waiting", > "4df40fae1310: Waiting", > "4df40fae1310: Verifying Checksum", > "4df40fae1310: Download complete", > "d0a704666261: Verifying Checksum", > "d0a704666261: Download complete", > "d0a704666261: Pull complete", > "4df40fae1310: Pull complete", > "Digest: sha256:a9c992ecf6a590d2d549ef59ef724604638a1918b26690ca0205ca6caf15c60b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:46:21,382 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:46:21,382 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpzBKMjd:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z 
--volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-26.1", > "2018-10-02 14:46:23,290 DEBUG: 28749 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "9e28a9d49d0f: Pulling fs layer", > "99145198ab24: Pulling fs layer", > "9e28a9d49d0f: Waiting", > "99145198ab24: Waiting", > "99145198ab24: Verifying Checksum", > "99145198ab24: Download complete", > "9e28a9d49d0f: Download complete", > "9e28a9d49d0f: Pull complete", > "99145198ab24: Pull complete", > "Digest: sha256:c8ad6dd93c095f7dc983f168d49fb64b51a827836b1522e9c06a5335ebdc70a4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:46:23,293 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:46:23,294 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp26ZDgw:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z 
--volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-26.1", > "2018-10-02 14:46:34,609 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.15 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.22:%PORT%/d1]/Ring_object_device[172.17.4.22:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.22:%PORT%/d1]/Ring_container_device[172.17.4.22:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.22:%PORT%/d1]/Ring_account_device[172.17.4.22:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.91 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring object device: 0.60", > " Ring account device: 0.62", > " Ring container device: 0.62", > " Config 
retrieval: 1.28", > " Exec: 1.56", > " Last run: 1538491593", > " Total: 4.70", > "Version:", > " Config: 1538491587", > " Puppet: 4.8.2", > "Gathering files modified after 2018-10-02 14:46:18.343504615 +0000", > "2018-10-02 14:46:34,609 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ CHECK_MODE=", > "+ '[' -d /tmp/puppet-check-mode ']'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > 
"Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ 
rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:18.343504615 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar xO", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift_ringbuilder", > "+ sed '/^#.*HEADER.*/d'", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-10-02 14:46:34,609 INFO: 28750 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-10-02 14:46:34,675 DEBUG: 28750 -- docker-puppet-swift_ringbuilder", > "2018-10-02 14:46:34,676 INFO: 28750 -- Finished processing puppet configs for swift_ringbuilder", > "2018-10-02 14:46:34,676 INFO: 28750 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- config_volume 
sahara", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- volumes []", > "2018-10-02 14:46:34,676 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:46:34,677 INFO: 28750 -- Removing container: docker-puppet-sahara", > "2018-10-02 14:46:34,749 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:46:36,036 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.49 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined 
content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_methods]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3290418f393b5f27967a4637d01c782b'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}3bd0015a5b258bebc53d757643b45830'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: 
/Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as 
'{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as 
'{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: 
/Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}1001349fa771bd31f137b23418ebcced'", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}4b26340c43a2574f28e5adf9342a5a67'", > "Notice: Applied catalog in 1.18 seconds", > " Total: 114", > " Success: 114", > " Changed: 114", > " Out of sync: 114", > " Total: 261", > " Skipped: 43", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.28", > " File: 0.31", > " Last run: 1538491594", > " Config retrieval: 5.00", > " Total: 5.61", > " Resources: 0.00", > " Config: 1538491588", > "Gathering files modified after 2018-10-02 14:46:21.612532752 +0000", > "2018-10-02 14:46:36,036 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:21.612532752 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/gnocchi", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-10-02 14:46:36,036 INFO: 28751 -- Removing container: docker-puppet-gnocchi", > "2018-10-02 14:46:36,083 DEBUG: 28751 -- docker-puppet-gnocchi", > "2018-10-02 14:46:36,083 INFO: 28751 -- Finished processing puppet configs for gnocchi", > "2018-10-02 14:46:36,084 INFO: 28751 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- config_volume clustercheck", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- volumes []", > "2018-10-02 14:46:36,084 DEBUG: 28751 -- check_mode 
0", > "2018-10-02 14:46:36,086 INFO: 28751 -- Removing container: docker-puppet-clustercheck", > "2018-10-02 14:46:36,152 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:37,198 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "8699899a971e: Pulling fs layer", > "45d7e459b0ba: Pulling fs layer", > "45d7e459b0ba: Verifying Checksum", > "45d7e459b0ba: Download complete", > "8699899a971e: Verifying Checksum", > "8699899a971e: Download complete", > "8699899a971e: Pull complete", > "45d7e459b0ba: Pull complete", > "Digest: sha256:fde08aa97680215d52c978016470d6ab81eb3896ac0f9a038a7be67515f7ef00", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:46:37,201 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:46:37,201 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp2Y46uV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-26.1", > "2018-10-02 14:46:42,875 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "86174678f419: Pulling fs layer", > "86174678f419: Verifying Checksum", > "86174678f419: Download complete", > "86174678f419: Pull complete", > "Digest: sha256:a18df92dad8491aa406a8a5075c976a71c5dff0af8c8ff75f0cb22355cc77f87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:42,878 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:46:42,878 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnwB_Ga:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:46,176 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.11 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > 
"Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}e5237002bcaa52681a06018ecc3a097b'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}87ccb1e3a759e10a84df786ce7a5d273'", > "Notice: Applied catalog in 8.19 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 375", > " Skipped: 39", > " Package: 0.11", > " File: 0.63", > " Total: 13.22", > " Last run: 1538491604", > " Config retrieval: 5.76", > " Nova config: 6.69", > " Config: 1538491590", > "Gathering files modified after 2018-10-02 14:46:23.487548787 +0000", > "2018-10-02 14:46:46,176 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: 
database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:23.487548787 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_placement", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-10-02 14:46:46,176 INFO: 28749 -- Removing container: docker-puppet-nova_placement", > "2018-10-02 14:46:46,223 DEBUG: 28749 -- docker-puppet-nova_placement", > "2018-10-02 14:46:46,223 INFO: 28749 -- Finished processing puppet configs for nova_placement", > "2018-10-02 14:46:46,223 INFO: 28749 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:46:46,223 DEBUG: 28749 -- config_volume aodh", > 
"2018-10-02 14:46:46,223 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-10-02 14:46:46,223 DEBUG: 28749 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-10-02 14:46:46,224 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:46:46,224 DEBUG: 28749 -- volumes []", > "2018-10-02 14:46:46,224 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:46:46,225 INFO: 28749 -- Removing container: docker-puppet-aodh", > "2018-10-02 14:46:46,287 INFO: 28749 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:46:48,344 DEBUG: 28749 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "70c8ade901ba: Pulling fs layer", > "e8ae5e32f329: Pulling fs layer", > "e8ae5e32f329: Download complete", > "70c8ade901ba: Verifying Checksum", > "70c8ade901ba: Download complete", > "70c8ade901ba: Pull complete", > "e8ae5e32f329: Pull complete", > "Digest: sha256:7cb294078a56b5adb50320b21f0f4d9dad0d2dc096d2f2b346ee686861589a46", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:46:48,348 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:46:48,348 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpobCzES:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-26.1", > "2018-10-02 14:46:49,400 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.18 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 1.47 seconds", > " Total: 25", > " Success: 25", > " Total: 197", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Package: 0.06", > " Sahara config: 1.09", > " Last run: 1538491608", > " Config retrieval: 2.49", > " Total: 3.66", > " Config: 1538491604", > "Gathering files modified after 2018-10-02 14:46:37.421664019 +0000", > "2018-10-02 14:46:49,401 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:37.421664019 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/sahara", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-10-02 14:46:49,401 INFO: 28750 -- Removing 
container: docker-puppet-sahara", > "2018-10-02 14:46:49,437 DEBUG: 28750 -- docker-puppet-sahara", > "2018-10-02 14:46:49,438 INFO: 28750 -- Finished processing puppet configs for sahara", > "2018-10-02 14:46:49,438 INFO: 28750 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- config_volume mysql", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- volumes []", > "2018-10-02 14:46:49,438 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:46:49,439 INFO: 28750 -- Removing container: docker-puppet-mysql", > "2018-10-02 14:46:49,488 INFO: 28750 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:49,491 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:46:49,491 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpikZRoN:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-26.1", > "2018-10-02 14:46:51,163 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.46 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}1bc5e3299c4a59a964cc16e21cad1919'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}a48c0f33532999b563dcb8f6cfc08135'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.57", > " Total: 0.60", > " Last run: 1538491610", > " Config: 1538491609", > "Gathering files modified after 2018-10-02 14:46:43.070708891 +0000", > "2018-10-02 14:46:51,164 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false 
--logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:43.070708891 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/clustercheck", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-10-02 14:46:51,164 INFO: 28751 -- Removing container: docker-puppet-clustercheck", > "2018-10-02 14:46:51,201 DEBUG: 28751 -- docker-puppet-clustercheck", > "2018-10-02 14:46:51,202 INFO: 28751 -- Finished processing puppet configs for clustercheck", > "2018-10-02 14:46:51,202 INFO: 28751 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- config_volume redis", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- config_image 
192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- volumes []", > "2018-10-02 14:46:51,202 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:46:51,203 INFO: 28751 -- Removing container: docker-puppet-redis", > "2018-10-02 14:46:51,263 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:46:54,822 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "b76c66c936ee: Pulling fs layer", > "edac33389285: Pulling fs layer", > "b76c66c936ee: Download complete", > "b76c66c936ee: Pull complete", > "edac33389285: Verifying Checksum", > "edac33389285: Download complete", > "edac33389285: Pull complete", > "Digest: sha256:8e75aa16fb47a7f685c996ceb37a84a6316a68a11a07f1c66b48117600612b2e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:46:54,825 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:46:54,825 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp6O1M8I:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-26.1", > "2018-10-02 14:47:02,167 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.28 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}a730a65a0efef3097d49f2084ff2db3e'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}76a4e05ad880b930b43fc47f1d505711'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}19d4164404171cb9dfab8aa9d4fff40b'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.36 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " File: 0.03", > " Last run: 1538491621", > " Config retrieval: 4.64", > " Total: 4.67", > " Config: 1538491616", > "Gathering files modified after 2018-10-02 14:46:49.696760380 +0000", > "2018-10-02 14:47:02,167 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:49.696760380 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/mysql", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-10-02 14:47:02,167 INFO: 28750 -- Removing container: docker-puppet-mysql", > "2018-10-02 14:47:02,204 DEBUG: 28750 -- docker-puppet-mysql", > "2018-10-02 14:47:02,205 INFO: 28750 -- Finished 
processing puppet configs for mysql", > "2018-10-02 14:47:02,205 INFO: 28750 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:47:02,205 DEBUG: 28750 -- config_volume nova", > "2018-10-02 14:47:02,205 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-10-02 14:47:02,205 DEBUG: 28750 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-10-02 14:47:02,205 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:47:02,205 DEBUG: 28750 -- volumes []", > "2018-10-02 14:47:02,206 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:47:02,207 INFO: 28750 -- Removing container: docker-puppet-nova", > "2018-10-02 14:47:02,270 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:47:03,082 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.99 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: 
/Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}0e2bac058facfc94edce0012e38554de'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.07 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " Augeas: 0.01", > " File: 0.01", > " Config retrieval: 1.12", > " Total: 1.14", > " Last run: 1538491622", > " Config: 1538491621", > "Gathering files modified after 2018-10-02 14:46:55.022800773 +0000", > "2018-10-02 14:47:03,082 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:55.022800773 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/redis", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-10-02 14:47:03,082 INFO: 28751 -- Removing container: docker-puppet-redis", > "2018-10-02 14:47:03,120 DEBUG: 28751 -- docker-puppet-redis", > "2018-10-02 14:47:03,121 INFO: 28751 -- Finished processing puppet configs for redis", > "2018-10-02 14:47:03,121 INFO: 28751 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- config_volume keystone", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- config_image 
192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- volumes []", > "2018-10-02 14:47:03,121 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:47:03,123 INFO: 28751 -- Removing container: docker-puppet-keystone", > "2018-10-02 14:47:03,195 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:47:04,019 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.58 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}cc1afad9a1f5c6f637a9a710dcceaa74'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}15a12d6d0ff8b2630073a3ea89425ca3'", > "Notice: Applied catalog in 1.90 seconds", > " Total: 110", > " Success: 110", > " Changed: 109", > " Out of sync: 109", > " Total: 329", > " Skipped: 40", > " Package: 0.05", > " File: 0.37", > " Aodh config: 0.77", > " Config retrieval: 5.23", > " Total: 6.44", > " Config: 1538491615", > "Gathering files modified after 2018-10-02 14:46:48.577751792 +0000", > "2018-10-02 14:47:04,019 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > "Warning: Unknown variable: 'undef'. at /etc/puppet/modules/aodh/manifests/init.pp:290:41", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Scope(Class[Aodh::Api]): host has no effect as of Newton and will be removed in a future \\", > "release. aodh::wsgi::apache supports setting a host via bind_host.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 132]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:46:48.577751792 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/aodh", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-10-02 14:47:04,019 INFO: 28749 -- Removing container: docker-puppet-aodh", > "2018-10-02 14:47:04,073 DEBUG: 28749 -- docker-puppet-aodh", > "2018-10-02 14:47:04,073 INFO: 28749 -- Finished processing puppet configs for aodh", > "2018-10-02 14:47:04,073 INFO: 28749 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:04,073 DEBUG: 28749 -- config_volume heat_api", > "2018-10-02 14:47:04,073 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 14:47:04,074 DEBUG: 28749 -- manifest include ::tripleo::profile::base::heat::api", > "2018-10-02 14:47:04,074 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:04,074 DEBUG: 28749 -- volumes []", > "2018-10-02 14:47:04,074 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:47:04,076 INFO: 28749 -- Removing container: docker-puppet-heat_api", > "2018-10-02 14:47:04,144 INFO: 28749 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:05,645 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "9e28a9d49d0f: Already exists", > "73c834b98c25: Pulling fs layer", > "73c834b98c25: Verifying Checksum", > "73c834b98c25: Download complete", > "73c834b98c25: Pull complete", > "Digest: sha256:0e5b7e3cf3455a72f25bf23e2d3e15f27add32743545241aa8a5bfd77559bf24", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:47:05,648 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:47:05,649 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpMnV666:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-26.1", > "2018-10-02 14:47:05,849 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "3bcc3bbd3f17: Pulling fs layer", > "016b47c04c8c: Pulling fs layer", > "016b47c04c8c: Verifying Checksum", > "016b47c04c8c: Download complete", > "3bcc3bbd3f17: Verifying Checksum", > "3bcc3bbd3f17: Download complete", > "3bcc3bbd3f17: Pull complete", > "016b47c04c8c: Pull complete", > "Digest: sha256:b8a47f5ce80ead2c8816fa3b237a5130565a3aea7bf0be3269d3c9d7867aff62", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:47:05,852 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:47:05,852 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpREbnFN:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-26.1", > "2018-10-02 14:47:06,588 DEBUG: 28749 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "d1bf34aac9d8: Pulling fs layer", > "1075fd166a56: Pulling fs layer", > "1075fd166a56: Verifying Checksum", > "1075fd166a56: Download complete", > "d1bf34aac9d8: Download complete", > "d1bf34aac9d8: Pull complete", > "1075fd166a56: Pull complete", > "Digest: sha256:e59baeac763341b8b2bab7f2bfbc4548e3ae4f38bc44046eb338d52d8eabf102", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:06,591 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:47:06,591 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp82ggzJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:20,906 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.01 
seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}174f565d793cba22a7a62ea64aeecbaa'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}21684b83b8a5b2d63c2032d127edc99b'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}fcbccf7248c45248286edb723591fd28'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}bca1121fd3cf5606e490ededb392ad06'", > "Notice: 
/Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}31a000b2513ac2033d315d1ae0706328'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}74c73d44870978b6247956366aa95bc3'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as 
'{md5}f3648a02806a430f97a24c380c6a9710'", > "Notice: Applied catalog in 2.29 seconds", > " Total: 126", > " Success: 126", > " Changed: 126", > " Out of sync: 126", > " Total: 324", > " Skipped: 34", > " Cron: 0.01", > " File: 0.24", > " Keystone config: 1.40", > " Last run: 1538491639", > " Config retrieval: 4.53", > " Total: 6.25", > " Config: 1538491632", > "Gathering files modified after 2018-10-02 14:47:06.728886758 +0000", > "2018-10-02 14:47:20,906 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:06.728886758 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/keystone", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-10-02 14:47:20,906 INFO: 28751 -- Removing container: docker-puppet-keystone", > "2018-10-02 14:47:20,949 DEBUG: 28751 -- docker-puppet-keystone", > "2018-10-02 14:47:20,949 INFO: 28751 -- Finished processing puppet configs for keystone", > "2018-10-02 14:47:20,950 INFO: 28751 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:47:20,950 DEBUG: 28751 -- config_volume memcached", > "2018-10-02 14:47:20,950 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 14:47:20,950 DEBUG: 28751 -- manifest include ::tripleo::profile::base::memcached", > "2018-10-02 14:47:20,950 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:47:20,950 DEBUG: 28751 -- volumes []", > "2018-10-02 14:47:20,950 DEBUG: 
28751 -- check_mode 0", > "2018-10-02 14:47:20,952 INFO: 28751 -- Removing container: docker-puppet-memcached", > "2018-10-02 14:47:21,006 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:47:21,020 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.86 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", 
> "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}c7f5b19accfbd3e3e1d18e458cf78bf6'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as 
'{md5}7c6507254a7fb1da2b9e55353e53ac0a'", > "Notice: Applied catalog in 2.24 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 336", > " Package: 0.12", > " File: 0.28", > " Heat config: 1.27", > " Config retrieval: 4.34", > " Total: 6.01", > "Gathering files modified after 2018-10-02 14:47:06.789887195 +0000", > "2018-10-02 14:47:21,020 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 128]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:06.789887195 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-10-02 14:47:21,021 INFO: 28749 -- Removing container: docker-puppet-heat_api", > "2018-10-02 14:47:21,069 DEBUG: 28749 -- docker-puppet-heat_api", > "2018-10-02 14:47:21,070 INFO: 28749 -- Finished processing puppet configs for heat_api", > "2018-10-02 14:47:21,070 INFO: 28749 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:21,070 DEBUG: 28749 -- config_volume heat", > "2018-10-02 14:47:21,070 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 14:47:21,070 DEBUG: 28749 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-10-02 14:47:21,070 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:21,070 
DEBUG: 28749 -- volumes []", > "2018-10-02 14:47:21,070 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:47:21,072 INFO: 28749 -- Removing container: docker-puppet-heat", > "2018-10-02 14:47:21,123 INFO: 28749 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:21,126 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:47:21,126 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmps595Lu:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-26.1", > "2018-10-02 14:47:22,562 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "13f6871ba653: Pulling fs layer", > "13f6871ba653: Verifying Checksum", > "13f6871ba653: Download complete", > "13f6871ba653: Pull complete", > "Digest: sha256:b85a55179015e133b7b42af8fad710e1b8f960cf126d9fef1750a2af97c849ab", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:47:22,565 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:47:22,566 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8p1cIx:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-26.1", > "2018-10-02 14:47:30,460 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.60 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed 
'{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}7360419ba2f385d827e99bd7f4389bf6'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.70", > " Total: 0.72", > " Last run: 1538491649", > " Config: 1538491649", > "Gathering files modified after 2018-10-02 14:47:22.759998408 +0000", > "2018-10-02 14:47:30,460 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:22.759998408 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/memcached", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-10-02 14:47:30,460 INFO: 28751 -- Removing container: 
docker-puppet-memcached", > "2018-10-02 14:47:30,494 DEBUG: 28751 -- docker-puppet-memcached", > "2018-10-02 14:47:30,494 INFO: 28751 -- Finished processing puppet configs for memcached", > "2018-10-02 14:47:30,494 INFO: 28751 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- config_volume panko", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- manifest include tripleo::profile::base::panko::api", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- volumes []", > "2018-10-02 14:47:30,495 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:47:30,496 INFO: 28751 -- Removing container: docker-puppet-panko", > "2018-10-02 14:47:30,558 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:47:31,638 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.16 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}8e60c90b63742c078fdadaa52995f5c0'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: 
/Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}96514bcfef7033c02b0b9682d47744b5'", > "Notice: Applied catalog in 10.82 seconds", > " Total: 179", > " Success: 179", > " Changed: 179", > " Out of sync: 179", > " Total: 504", > " Skipped: 75", > " Cron: 0.02", > " Package: 0.09", > " File: 0.29", > " Total: 15.81", > " Config retrieval: 5.85", > " Nova config: 9.53", > "Gathering files modified after 2018-10-02 14:47:05.873880628 +0000", > "2018-10-02 14:47:31,638 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > "Warning: Scope(Class[Nova::Api]): Running nova metadata api via evenlet is deprecated and will be removed in Stein release.", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:05.873880628 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-10-02 14:47:31,638 INFO: 28750 -- Removing container: docker-puppet-nova", > "2018-10-02 14:47:31,694 DEBUG: 28750 -- docker-puppet-nova", > "2018-10-02 14:47:31,694 INFO: 28750 -- Finished processing puppet configs for nova", > "2018-10-02 14:47:31,694 INFO: 28750 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:47:31,694 DEBUG: 28750 -- config_volume iscsid", > "2018-10-02 14:47:31,694 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-10-02 14:47:31,694 DEBUG: 28750 -- manifest include ::tripleo::profile::base::iscsid", > "2018-10-02 14:47:31,695 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:47:31,695 DEBUG: 28750 -- volumes 
[u'/etc/iscsi:/etc/iscsi']", > "2018-10-02 14:47:31,695 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:47:31,696 INFO: 28750 -- Removing container: docker-puppet-iscsid", > "2018-10-02 14:47:31,760 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:47:32,394 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "2afcd4790b43: Pulling fs layer", > "2afcd4790b43: Verifying Checksum", > "2afcd4790b43: Download complete", > "2afcd4790b43: Pull complete", > "Digest: sha256:b516e920a95255994d6493d4a922af867754e570e2afe8afeaa5c2f3e25a6d94", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:47:32,397 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:47:32,397 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpRbiOTj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-26.1", > "2018-10-02 14:47:33,103 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "8eabf556166e: Pulling fs layer", > "884a4a0b0967: Pulling fs layer", > "884a4a0b0967: Download complete", > "8eabf556166e: Verifying Checksum", > "8eabf556166e: Download complete", > "8eabf556166e: Pull complete", > "884a4a0b0967: Pull complete", > "Digest: sha256:7bfddde03ab9169a2eb08c712adc74c27bb8971d4823a46dbb41e3525c2f000b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:47:33,106 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:47:33,106 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp97Fif4:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-26.1", > "2018-10-02 14:47:33,477 DEBUG: 28749 -- Notice: hiera(): Cannot load 
backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.31 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.97 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Augeas: 0.03", > " Heat config: 1.55", > " Last run: 1538491652", > " Config retrieval: 2.54", > " Total: 4.18", > " Config: 1538491647", > "Gathering files modified after 2018-10-02 14:47:21.297988474 +0000", > "2018-10-02 14:47:33,477 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:21.297988474 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt 
/var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-10-02 14:47:33,477 INFO: 28749 -- Removing container: docker-puppet-heat", > "2018-10-02 14:47:33,516 DEBUG: 28749 -- docker-puppet-heat", > "2018-10-02 14:47:33,516 INFO: 28749 -- Finished processing puppet configs for heat", > "2018-10-02 14:47:33,516 INFO: 28749 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:47:33,516 DEBUG: 28749 -- config_volume cinder", > "2018-10-02 14:47:33,516 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-10-02 14:47:33,516 DEBUG: 28749 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-10-02 14:47:33,516 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:47:33,517 DEBUG: 28749 -- volumes []", > "2018-10-02 14:47:33,517 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:47:33,518 INFO: 28749 -- Removing container: docker-puppet-cinder", > "2018-10-02 14:47:33,575 INFO: 28749 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:47:40,501 DEBUG: 28750 -- Notice: 
hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.51 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Total: 0.59", > " Last run: 1538491659", > " Config: 1538491659", > "Gathering files modified after 2018-10-02 14:47:32.653063925 +0000", > "2018-10-02 14:47:40,501 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:32.653063925 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-10-02 14:47:40,501 INFO: 28750 -- Removing container: docker-puppet-iscsid", > "2018-10-02 14:47:40,544 DEBUG: 28750 -- docker-puppet-iscsid", > "2018-10-02 14:47:40,545 INFO: 28750 -- Finished processing puppet configs for iscsid", > "2018-10-02 14:47:40,545 INFO: 28750 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- config_volume glance_api", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- manifest include ::tripleo::profile::base::glance::api", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- volumes []", > "2018-10-02 14:47:40,545 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:47:40,546 INFO: 28750 -- Removing container: docker-puppet-glance_api", > "2018-10-02 14:47:40,620 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:47:41,995 DEBUG: 28749 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "58cfa97883f0: Pulling fs layer", > "ddff537686ab: Pulling fs layer", > "ddff537686ab: Verifying Checksum", > "ddff537686ab: Download complete", > "58cfa97883f0: Verifying Checksum", > "58cfa97883f0: Download complete", > "58cfa97883f0: Pull complete", > "ddff537686ab: Pull complete", > "Digest: sha256:ad06296168f9f7818d054cba160af0406be642f4622b2b267bf10e014843aa37", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:47:41,999 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:47:41,999 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpxlT99p:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-26.1", > "2018-10-02 14:47:46,300 DEBUG: 28751 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.12 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}42ecc58ce65883cdef0191f8e567387e'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}77819f8f080e4af0f2b23b16f870d319'", > "Notice: Applied catalog in 1.10 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 256", > " Panko api paste ini: 0.00", > " Panko config: 0.14", > " File: 0.34", > " Last run: 1538491664", > " Total: 5.21", > "Gathering files modified after 2018-10-02 14:47:33.317068240 +0000", > "2018-10-02 14:47:46,300 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:33.317068240 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/panko", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-10-02 14:47:46,300 INFO: 28751 -- Removing container: docker-puppet-panko", > "2018-10-02 14:47:46,345 DEBUG: 28751 -- docker-puppet-panko", > "2018-10-02 14:47:46,345 INFO: 28751 -- Finished processing puppet configs for panko", > "2018-10-02 14:47:46,345 INFO: 28751 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- config_volume crond", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- volumes []", > "2018-10-02 14:47:46,346 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:47:46,347 INFO: 28751 -- Removing container: 
docker-puppet-crond", > "2018-10-02 14:47:46,409 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:47:46,962 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "4d80de3c75a6: Pulling fs layer", > "4d80de3c75a6: Download complete", > "4d80de3c75a6: Pull complete", > "Digest: sha256:d7abfe49c737904a24b4da901cd357c6a9aba94959e6be50bdb830a6a32fec7b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:47:46,965 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:47:46,965 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpARjLtb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-26.1", > "2018-10-02 14:47:47,588 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "07f9f19afd91: Pulling fs layer", > "0a5772a6be1c: Pulling fs layer", > "0a5772a6be1c: Verifying Checksum", > "0a5772a6be1c: Download complete", > "07f9f19afd91: Download complete", > "07f9f19afd91: Pull complete", > "0a5772a6be1c: Pull complete", > "Digest: sha256:69eb9af199d6572ba1406843685ec68dab3eeb943513ce161d1fb81714f2fc6a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:47:47,592 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:47:47,592 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmplHcbDe:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-26.1", > "2018-10-02 14:47:54,755 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
controller-0.localdomain in environment production in 0.43 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.52", > " Total: 0.53", > " Last run: 1538491674", > " Config: 1538491673", > "Gathering files modified after 2018-10-02 14:47:47.164155531 +0000", > "2018-10-02 14:47:54,755 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:47.164155531 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' 
'--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-10-02 14:47:54,755 INFO: 28751 -- Removing container: docker-puppet-crond", > "2018-10-02 14:47:54,793 DEBUG: 28751 -- docker-puppet-crond", > "2018-10-02 14:47:54,794 INFO: 28751 -- Finished processing puppet configs for crond", > "2018-10-02 14:47:54,794 INFO: 28751 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- config_volume haproxy", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-10-02 14:47:54,794 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:47:54,795 INFO: 28751 -- Removing container: docker-puppet-haproxy", > "2018-10-02 14:47:54,858 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:47:58,656 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... 
", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "21ef70eb8347: Pulling fs layer", > "21ef70eb8347: Verifying Checksum", > "21ef70eb8347: Download complete", > "21ef70eb8347: Pull complete", > "Digest: sha256:02d95b40692b62a39f6c507d29db6c493db41ee4905a1c4d7aefbd1b0324cea9", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:47:58,659 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:47:58,659 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpqYPtwM:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-26.1", > "2018-10-02 14:48:00,481 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot 
load such file -- hiera/backend/module_data_backend", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: 
/Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 3.01 seconds", > " Total: 44", > " Success: 44", > " Total: 255", > " Out of sync: 44", > " Changed: 44", > " Skipped: 60", > " Glance cache config: 0.24", > " Last run: 1538491679", > " Glance api config: 2.05", > " Config retrieval: 2.64", > " Total: 5.00", > "Gathering files modified after 2018-10-02 14:47:47.788159325 +0000", > 
"2018-10-02 14:48:00,481 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 198]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:47.788159325 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/glance_api", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-10-02 14:48:00,481 INFO: 28750 -- Removing container: docker-puppet-glance_api", > "2018-10-02 14:48:00,537 DEBUG: 28750 -- docker-puppet-glance_api", > "2018-10-02 14:48:00,537 INFO: 28750 -- Finished processing puppet configs for glance_api", > "2018-10-02 14:48:00,537 INFO: 28750 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:48:00,538 DEBUG: 28750 -- config_volume rabbitmq", > "2018-10-02 14:48:00,538 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-10-02 14:48:00,538 DEBUG: 28750 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-10-02 14:48:00,538 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:48:00,538 DEBUG: 28750 -- volumes []", > "2018-10-02 14:48:00,538 
DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:48:00,539 INFO: 28750 -- Removing container: docker-puppet-rabbitmq", > "2018-10-02 14:48:00,604 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:48:00,671 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.94 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}9fdbbecae936c76d6b860e2717c7ac5c'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/default_volume_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined 
content as '{md5}96b5e0c01cef024226487b83f94d4c9b'", > "Notice: Applied catalog in 4.86 seconds", > " Total: 133", > " Success: 133", > " Changed: 133", > " Out of sync: 133", > " Skipped: 37", > " Total: 370", > " File line: 0.00", > " File: 0.30", > " Augeas: 0.63", > " Last run: 1538491678", > " Cinder config: 3.26", > " Config retrieval: 4.51", > " Total: 8.75", > " Config: 1538491669", > "Gathering files modified after 2018-10-02 14:47:42.206124817 +0000", > "2018-10-02 14:48:00,672 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_info parameter is deprecated, has no effect and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:42.206124817 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/cinder", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-10-02 14:48:00,672 INFO: 28749 -- Removing container: docker-puppet-cinder", > "2018-10-02 14:48:00,726 DEBUG: 28749 -- docker-puppet-cinder", > "2018-10-02 14:48:00,726 INFO: 28749 -- 
Finished processing puppet configs for cinder", > "2018-10-02 14:48:00,727 INFO: 28749 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- config_volume swift", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- volumes []", > "2018-10-02 14:48:00,727 DEBUG: 28749 -- check_mode 0", > "2018-10-02 14:48:00,729 INFO: 28749 -- Removing container: docker-puppet-swift", > "2018-10-02 14:48:00,776 INFO: 28749 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:48:00,780 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:48:00,780 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpBk4kyJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-26.1", > "2018-10-02 14:48:05,241 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "7631898d5513: Pulling fs layer", > "7631898d5513: Verifying Checksum", > "7631898d5513: Download complete", > "7631898d5513: Pull complete", > "Digest: sha256:a77a6ab407a3f4020e73c1dc1548581abaeeacfdfb4c397b44d307beeedc98b4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:48:05,244 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:48:05,244 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp1t6J6V:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-26.1", > "2018-10-02 14:48:09,851 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.98 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}fd47d817eca498ff214a0e09f1d145d2'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.41 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " File: 0.08", > " Last run: 1538491688", > " Config retrieval: 3.26", > " Total: 3.35", > " Config: 1538491685", > "Gathering files modified after 2018-10-02 14:47:58.830225544 +0000", > "2018-10-02 14:48:09,851 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please 
use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:47:58.830225544 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/haproxy", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-10-02 14:48:09,851 INFO: 28751 -- Removing container: docker-puppet-haproxy", > "2018-10-02 14:48:09,888 DEBUG: 28751 -- docker-puppet-haproxy", > "2018-10-02 14:48:09,889 INFO: 28751 -- Finished processing puppet configs for haproxy", > "2018-10-02 14:48:09,889 INFO: 28751 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- config_volume ceilometer", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- puppet_tags 
file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- volumes []", > "2018-10-02 14:48:09,889 DEBUG: 28751 -- check_mode 0", > "2018-10-02 14:48:09,890 INFO: 28751 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 14:48:09,958 INFO: 28751 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:48:11,382 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.92 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.14:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}d70b9e638f9e3d43d4680dcfcc952a89'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}e42386ca0f3eeeebaa42793a97bbdf94'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'VdgkIMr94WocvP5mcFjrca7al'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken s3api s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.14:11211'", > "Notice: 
/Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}2875dd3787c50623c523cfd4ff89cc89'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}7c3beb5150bca7a9677cd67ebb703c42'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}3c609a01c7eb0082ae8c97d9a4091b00'", > "Notice: Applied catalog in 0.67 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.04", > " Swift proxy config: 0.23", > " Last run: 1538491690", > " Config retrieval: 2.29", > " Total: 2.59", > " Config: 1538491687", > "Gathering files modified after 2018-10-02 14:48:00.996238196 +0000", > "2018-10-02 14:48:11,383 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. 
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 189]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 203]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:48:00.996238196 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-10-02 14:48:11,383 INFO: 28749 -- Removing container: docker-puppet-swift", > "2018-10-02 14:48:11,423 DEBUG: 28749 -- docker-puppet-swift", > "2018-10-02 14:48:11,423 INFO: 28749 -- Finished processing puppet configs for swift", > "2018-10-02 14:48:11,424 INFO: 28749 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:48:11,424 DEBUG: 28749 -- config_volume heat_api_cfn", > "2018-10-02 14:48:11,424 DEBUG: 28749 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-10-02 14:48:11,424 DEBUG: 28749 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-10-02 14:48:11,424 DEBUG: 28749 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:48:11,424 DEBUG: 28749 -- volumes []", > "2018-10-02 14:48:11,424 DEBUG: 
28749 -- check_mode 0", > "2018-10-02 14:48:11,425 INFO: 28749 -- Removing container: docker-puppet-heat_api_cfn", > "2018-10-02 14:48:11,498 INFO: 28749 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:48:12,148 DEBUG: 28749 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "d1bf34aac9d8: Already exists", > "814880d697ca: Pulling fs layer", > "814880d697ca: Verifying Checksum", > "814880d697ca: Download complete", > "814880d697ca: Pull complete", > "Digest: sha256:83df23b0a5e5290012456aa81f05f4c3df8b4dea4e0e6a53f8392ca4cd9f0067", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:48:12,151 DEBUG: 28749 -- NET_HOST enabled", > "2018-10-02 14:48:12,151 DEBUG: 28749 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpA7QUis:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-26.1", > "2018-10-02 14:48:12,188 DEBUG: 28751 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "5fcda0d83a5e: Pulling fs layer", > "2142eca15b92: Pulling fs layer", > "5fcda0d83a5e: Verifying Checksum", > "5fcda0d83a5e: Download complete", > "2142eca15b92: Verifying Checksum", > "2142eca15b92: Download complete", > "5fcda0d83a5e: Pull complete", > "2142eca15b92: Pull complete", > "Digest: sha256:ba6a24fd5b438c2530cbd903d1b4616e6075f146618be39391273ae43949bbad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:48:12,191 DEBUG: 28751 -- NET_HOST enabled", > "2018-10-02 14:48:12,192 DEBUG: 28751 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQNb7zm:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-26.1", > "2018-10-02 14:48:18,484 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.73 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}0867ff59447ab9037d8df766412aa2f4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}d09697909bd4d0571b803f88b4447ae3'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.06 seconds", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " Config 
retrieval: 0.83", > " Total: 0.86", > " Last run: 1538491697", > " Config: 1538491696", > "Gathering files modified after 2018-10-02 14:48:05.413263709 +0000", > "2018-10-02 14:48:18,485 DEBUG: 28750 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:48:05.413263709 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/rabbitmq", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-10-02 14:48:18,485 INFO: 28750 -- Removing container: docker-puppet-rabbitmq", > "2018-10-02 14:48:18,541 DEBUG: 28750 -- docker-puppet-rabbitmq", > "2018-10-02 14:48:18,542 INFO: 28750 -- Finished processing puppet configs for rabbitmq", > "2018-10-02 14:48:18,542 INFO: 28750 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 
14:48:18,542 DEBUG: 28750 -- config_volume neutron", > "2018-10-02 14:48:18,542 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-10-02 14:48:18,542 DEBUG: 28750 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-10-02 14:48:18,542 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:48:18,542 DEBUG: 28750 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-10-02 14:48:18,542 DEBUG: 28750 -- check_mode 0", > "2018-10-02 14:48:18,543 INFO: 28750 -- Removing container: docker-puppet-neutron", > "2018-10-02 14:48:18,606 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:48:21,955 DEBUG: 28751 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.42 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}e1b13cf3e430a5cacf9cd8ad4704c3b5'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.71 seconds", > " Total: 26", > " Success: 26", > " Total: 156", > " Out of sync: 26", > " Changed: 26", > " Skipped: 35", > " Ceilometer config: 0.54", > " Config retrieval: 1.67", > " Last run: 1538491700", > " Total: 2.22", > " Config: 1538491698", > "Gathering files modified after 2018-10-02 14:48:12.454303467 +0000", > "2018-10-02 14:48:21,955 DEBUG: 28751 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:48:12.454303467 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-10-02 14:48:21,955 INFO: 28751 -- Removing container: docker-puppet-ceilometer", > "2018-10-02 14:48:22,012 DEBUG: 28751 -- docker-puppet-ceilometer", > 
"2018-10-02 14:48:22,012 INFO: 28751 -- Finished processing puppet configs for ceilometer", > "2018-10-02 14:48:24,311 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "f3c66d22e08b: Pulling fs layer", > "6cca3e1c80e1: Pulling fs layer", > "d405f46408bf: Pulling fs layer", > "d405f46408bf: Verifying Checksum", > "d405f46408bf: Download complete", > "6cca3e1c80e1: Verifying Checksum", > "6cca3e1c80e1: Download complete", > "f3c66d22e08b: Verifying Checksum", > "f3c66d22e08b: Download complete", > "f3c66d22e08b: Pull complete", > "6cca3e1c80e1: Pull complete", > "d405f46408bf: Pull complete", > "Digest: sha256:0c7ace86b7c08a5ec94dbf283b5a7a95f0678caf8c830185bcfc7a5dbaec5704", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:48:24,314 DEBUG: 28750 -- NET_HOST enabled", > "2018-10-02 14:48:24,314 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpLHqfT4:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-26.1", > "2018-10-02 14:48:27,292 DEBUG: 28749 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.16 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}f24b033bb52d09b8010d13d68a3ac67a'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}9be2d77b7acd1f14ee959f77d3f2f65f'", > "Notice: Applied catalog in 2.67 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 
338", > " File: 0.33", > " Heat config: 1.61", > " Last run: 1538491705", > " Total: 6.65", > "Gathering files modified after 2018-10-02 14:48:12.379303047 +0000", > "2018-10-02 14:48:27,292 DEBUG: 28749 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-10-02 14:48:12.379303047 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api_cfn", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-10-02 14:48:27,292 INFO: 28749 -- Removing container: docker-puppet-heat_api_cfn", > "2018-10-02 14:48:27,339 DEBUG: 28749 -- docker-puppet-heat_api_cfn", > "2018-10-02 14:48:27,340 INFO: 28749 -- Finished processing puppet configs for heat_api_cfn", > "2018-10-02 14:48:37,338 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: 
cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.22 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > 
"Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created",
> "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created",
> "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created",
> "Notice: Applied catalog in 1.66 seconds",
> " Total: 105",
> " Success: 105",
> " Changed: 105",
> " Out of sync: 105",
> " Total: 358",
> " Skipped: 44",
> " Neutron api config: 0.00",
> " Neutron l3 agent config: 0.01",
> " Neutron agent ovs: 0.02",
> " Neutron metadata agent config: 0.02",
> " Neutron plugin ml2: 0.02",
> " Neutron dhcp agent config: 0.08",
> " Neutron config: 1.23",
> " Last run: 1538491716",
> " Config retrieval: 3.56",
> " Config: 1538491710",
> "Gathering files modified after 2018-10-02 14:48:24.507369089 +0000",
> "2018-10-02 14:48:37,338 DEBUG: 28750 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'",
> "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time",
> "+ touch /var/lib/config-data/neutron.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp",
> "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)",
> "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)",
> "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 492]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 290]",
> "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6",
> " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron",
> "++ stat -c %y /var/lib/config-data/neutron.origin_of_time",
> "+ echo 'Gathering files modified after 2018-10-02 14:48:24.507369089 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/neutron",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01",
> "2018-10-02 14:48:37,338 INFO: 28750 -- Removing container: docker-puppet-neutron",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- docker-puppet-neutron",
> "2018-10-02 14:48:37,371 INFO: 28750 -- Finished processing puppet configs for neutron",
> "2018-10-02 14:48:37,371 INFO: 28750 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- config_volume horizon",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- manifest include ::tripleo::profile::base::horizon",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- volumes []",
> "2018-10-02 14:48:37,371 DEBUG: 28750 -- check_mode 0",
> "2018-10-02 14:48:37,372 INFO: 28750 -- Removing container: docker-puppet-horizon",
> "2018-10-02 14:48:37,428 INFO: 28750 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1",
> "2018-10-02 14:48:42,776 DEBUG: 28750 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-horizon ... ",
> "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon",
> "e2ca3343265c: Pulling fs layer",
> "e2ca3343265c: Download complete",
> "e2ca3343265c: Pull complete",
> "Digest: sha256:fc09d11276f0250ec232eada31a7417337bdad0257605eb44ff4afc1692e17b5",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1",
> "2018-10-02 14:48:42,779 DEBUG: 28750 -- NET_HOST enabled",
> "2018-10-02 14:48:42,779 DEBUG: 28750 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpUtxc7U:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-26.1",
> "2018-10-02 14:48:53,481 DEBUG: 28750 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.58 seconds",
> "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as '{md5}215d3f1a10d5e5269df1af0d37ac0cad'",
> "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'",
> "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created",
> "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}086d18e1b6c245d40498c4b7c7faa115'",
> "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'",
> "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}8e891bc57ee752f792938ffd379bd3c7' to '{md5}fe4c600310e3cfe486939f6bfd943807'",
> "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'",
> "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'",
> "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'",
> "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}ad85709ccb83245ae04ef5fc127019f1'",
> "Notice: Applied catalog in 0.66 seconds",
> " Total: 86",
> " Success: 86",
> " Total: 172",
> " Out of sync: 84",
> " Changed: 84",
> " File: 0.23",
> " Last run: 1538491732",
> " Config retrieval: 2.92",
> " Total: 3.16",
> " Config: 1538491728",
> "Gathering files modified after 2018-10-02 14:48:42.972463862 +0000",
> "2018-10-02 14:48:53,481 DEBUG: 28750 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,horizon_config ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'",
> "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time",
> "+ touch /var/lib/config-data/horizon.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp",
> "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]",
> "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> "Warning: Undefined variable ''; ",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 604]:[\"/etc/config.pp\", 2]",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 605]:[\"/etc/config.pp\", 2]",
> " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 607]:[\"/etc/config.pp\", 2]",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon",
> "++ stat -c %y /var/lib/config-data/horizon.origin_of_time",
> "+ echo 'Gathering files modified after 2018-10-02 14:48:42.972463862 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/horizon",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/horizon",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01",
> "2018-10-02 14:48:53,481 INFO: 28750 -- Removing container: docker-puppet-horizon",
> "2018-10-02 14:48:53,534 DEBUG: 28750 -- docker-puppet-horizon",
> "2018-10-02 14:48:53,534 INFO: 28750 -- Finished processing puppet configs for horizon",
> "2018-10-02 14:48:53,535 DEBUG: 28748 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data",
> "2018-10-02 14:48:53,535 DEBUG: 28748 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json",
> "2018-10-02 14:48:53,537 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql",
> "2018-10-02 14:48:53,537 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql",
> "2018-10-02 14:48:53,538 DEBUG: 28748 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=1f52b52d4aaf61dff272bf1e40bda698",
> "2018-10-02 14:48:53,538 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq",
> "2018-10-02 14:48:53,538 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq",
> "2018-10-02 14:48:53,539 DEBUG: 28748 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=ab05dc4bac210f7cf9265a041c7815af",
> "2018-10-02 14:48:53,539 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig",
> "2018-10-02 14:48:53,540 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck",
> "2018-10-02 14:48:53,540 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=a37da986df3cc7cedd322860dcf6c290",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=1f52b52d4aaf61dff272bf1e40bda698",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=fd47d817eca498ff214a0e09f1d145d2",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=ab05dc4bac210f7cf9265a041c7815af",
> "2018-10-02 14:48:53,541 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc",
> "2018-10-02 14:48:53,542 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis",
> "2018-10-02 14:48:53,542 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis",
> "2018-10-02 14:48:53,542 DEBUG: 28748 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=81521878fb23b2f19fd3727191f4f767",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=a437e3d1bbbcab3d3e9e521cd0d2f185",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Updating config hash for swift_rsync_fix, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data",
> "2018-10-02 14:48:53,544 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=1d7b10ed9df38d2c7d4b6c765dcac0c8",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc",
> "2018-10-02 14:48:53,545 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=1d7b10ed9df38d2c7d4b6c765dcac0c8",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=1d7b10ed9df38d2c7d4b6c765dcac0c8",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d",
> "2018-10-02 14:48:53,546 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=1d28f52529bc88fdb464822dd88c0229",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon",
> "2018-10-02 14:48:53,547 DEBUG: 28748 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=2671262ed67a24080e5048bb8dc64c74",
> "2018-10-02 14:48:53,549 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,549 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,549 DEBUG: 28748 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=1346da87433acd3079a601966a832d89",
> "2018-10-02 14:48:53,549 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=c0cfcb5ea0301578f21058c7d79ad48a",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,550 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=bf29a85d24265cace52cd05ceecb4501",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-10-02 14:48:53,551 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=de7997fc8828c6cdda952fe7ae39d5e0",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=1346da87433acd3079a601966a832d89",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,552 DEBUG: 28748 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=1d28f52529bc88fdb464822dd88c0229",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-10-02 14:48:53,553 DEBUG: 28748 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=de7997fc8828c6cdda952fe7ae39d5e0-038d212b0a2fac8ea442514c30c440d2",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=1346da87433acd3079a601966a832d89",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,554 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=26abc623da117ecdf09bbd38c54c7080",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=bf29a85d24265cace52cd05ceecb4501",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=c0cfcb5ea0301578f21058c7d79ad48a",
> "2018-10-02 14:48:53,555 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=6ec267361eea8f7ee72541abbae1f26c",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=55ad8aad0407abdb8398b2aa0b1cc437",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=55ad8aad0407abdb8398b2aa0b1cc437",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-10-02 14:48:53,556 DEBUG: 28748 -- Got hashfile
/var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-10-02 14:48:53,556 DEBUG: 28748 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=6f2a5e23a896d70ebbc2c66d87cd9266", > "2018-10-02 14:48:53,556 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd", > "2018-10-02 14:48:53,557 DEBUG: 28748 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=bf12c7a714b2c8c0e29e95e0f283a0cd", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=038d212b0a2fac8ea442514c30c440d2", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=1346da87433acd3079a601966a832d89", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=1c4c7a968a49935c4d2e99f6dfe7e123", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-10-02 14:48:53,558 DEBUG: 28748 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=b4cb8bc25446481484d953167274ee5f", > "2018-10-02 14:48:53,561 DEBUG: 
28748 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=dabc53f86aebcf7fa7bfadac996bf1a1", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-10-02 14:48:53,561 DEBUG: 28748 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=dabc53f86aebcf7fa7bfadac996bf1a1", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=c0cfcb5ea0301578f21058c7d79ad48a", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > 
"2018-10-02 14:48:53,562 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=dabc53f86aebcf7fa7bfadac996bf1a1", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=dabc53f86aebcf7fa7bfadac996bf1a1", > "2018-10-02 14:48:53,562 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=c0cfcb5ea0301578f21058c7d79ad48a", > "2018-10-02 14:48:53,563 DEBUG: 28748 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=c0cfcb5ea0301578f21058c7d79ad48a" > ] >} >2018-10-02 10:48:55,357 p=605 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 10:48:55,357 p=605 u=mistral | Tuesday 02 October 2018 10:48:55 -0400 (0:00:01.427) 0:09:25.630 ******* >2018-10-02 10:48:55,392 p=605 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-10-02 10:48:55,426 p=605 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:48:55,441 p=605 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:48:55,473 p=605 u=mistral | TASK [Diff docker-puppet.py puppet-generated changes for check mode] *********** >2018-10-02 10:48:55,473 p=605 u=mistral | Tuesday 02 October 2018 10:48:55 -0400 (0:00:00.116) 0:09:25.746 ******* >2018-10-02 10:48:55,506 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:48:55,539 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:48:55,554 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:48:55,586 p=605 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-10-02 10:48:55,587 p=605 u=mistral | Tuesday 02 October 2018 10:48:55 -0400 (0:00:00.113) 0:09:25.860 ******* >2018-10-02 10:48:56,133 p=605 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:48:56,174 p=605 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:49:24,170 p=605 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:49:24,197 p=605 u=mistral | TASK [Debug output for task: Start containers for step 1] ********************** >2018-10-02 10:49:24,197 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:28.610) 0:09:54.471 ******* >2018-10-02 10:49:24,275 p=605 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > 
"stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "0f4899fadd7f: Already exists", > "ff59208988ad: Already exists", > "58cfa97883f0: Already exists", > "b22bc33202f5: Pulling fs layer", > "b22bc33202f5: Verifying Checksum", > "b22bc33202f5: Download complete", > "b22bc33202f5: Pull complete", > "Digest: sha256:9be80516b13b878894cae03aae4bd4f039c4deace2065b4f76e804e2272b208f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-26.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... ", > "2018-09-26.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "c3ba3ad5e66e: Pulling fs layer", > "c3ba3ad5e66e: Verifying Checksum", > "c3ba3ad5e66e: Download complete", > "c3ba3ad5e66e: Pull complete", > "Digest: sha256:d507723333640d3a4288adc083ee03560e5a216c19584c673f802cab5ee4e6bc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-26.1", > "stdout: ", > "stdout: a391e6a10adf4b0568bdda430b539d7ade3a1dda0996412ba1d0d6da00c8de76", > "stdout: 8ca990973a6cc9346797fb832a54a36ba86009402087106ebcbcf359376f8068", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will 
also give you the option of removing the test", > "databases and anonymous user created by default. This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "181002 14:49:15 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "181002 14:49:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. 
This is intended only for testing, and to make the installation", > "go a bit smoother. You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "181002 14:49:18 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "181002 14:49:19 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "181002 14:49:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "181002 14:49:22 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > 
"INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to /etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-10-02 14:49:02 140617208215744 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 14:49:02 140617208215744 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-10-02 14:49:06 139717215103168 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 14:49:06 139717215103168 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-10-02 14:49:10 140603929888960 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-10-02 14:49:10 140603929888960 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]" > ] >} >2018-10-02 10:49:24,282 p=605 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 10:49:24,323 p=605 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-10-02 10:49:24,350 p=605 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-10-02 10:49:24,350 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:00.153) 0:09:54.624 ******* >2018-10-02 10:49:24,558 p=605 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:49:24,604 p=605 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 
10:49:24,665 p=605 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-10-02 10:49:24,696 p=605 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-10-02 10:49:24,696 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:00.345) 0:09:54.969 ******* >2018-10-02 10:49:24,731 p=605 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:49:24,763 p=605 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:49:24,778 p=605 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-10-02 10:49:24,808 p=605 u=mistral | TASK [Debug output for task: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-10-02 10:49:24,808 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:00.112) 0:09:55.082 ******* >2018-10-02 10:49:24,849 p=605 u=mistral | skipping: [controller-0] => {} >2018-10-02 10:49:24,883 p=605 u=mistral | skipping: [compute-0] => {} >2018-10-02 10:49:24,898 p=605 u=mistral | skipping: [ceph-0] => {} >2018-10-02 10:49:24,905 p=605 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-10-02 10:49:24,925 p=605 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-10-02 10:49:24,925 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:00.117) 0:09:55.199 ******* >2018-10-02 10:49:24,947 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:24,961 p=605 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* 
>2018-10-02 10:49:24,961 p=605 u=mistral | Tuesday 02 October 2018 10:49:24 -0400 (0:00:00.035) 0:09:55.234 ******* >2018-10-02 10:49:24,991 p=605 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-10-02 10:49:24,997 p=605 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,004 p=605 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,020 p=605 u=mistral | TASK [generate inventory] ****************************************************** >2018-10-02 10:49:25,020 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.058) 0:09:55.293 ******* >2018-10-02 10:49:25,039 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,053 p=605 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-10-02 10:49:25,053 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.033) 0:09:55.326 ******* >2018-10-02 10:49:25,077 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,091 p=605 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-10-02 10:49:25,091 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.038) 0:09:55.365 ******* >2018-10-02 10:49:25,112 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-10-02 10:49:25,127 p=605 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-10-02 10:49:25,127 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.035) 0:09:55.401 ******* >2018-10-02 10:49:25,151 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,164 p=605 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-10-02 10:49:25,165 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.037) 0:09:55.438 ******* >2018-10-02 10:49:25,184 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,199 p=605 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-10-02 10:49:25,199 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.034) 0:09:55.472 ******* >2018-10-02 10:49:25,219 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,233 p=605 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-10-02 10:49:25,234 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.034) 0:09:55.507 ******* >2018-10-02 10:49:25,264 p=605 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-10-02 10:49:25,278 p=605 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-10-02 10:49:25,278 p=605 u=mistral | Tuesday 02 October 2018 10:49:25 -0400 (0:00:00.044) 0:09:55.551 ******* >2018-10-02 10:49:27,913 p=605 u=mistral | changed: [undercloud] => {"changed": true, "cmd": "ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_command.log\" ANSIBLE_CONFIG=\"/var/lib/mistral/overcloud/ansible.cfg\" 
ANSIBLE_REMOTE_TEMP=/tmp/nodes_uuid_tmp ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml /var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "delta": "0:00:02.458026", "end": "2018-10-02 10:49:27.892269", "rc": 0, "start": "2018-10-02 10:49:25.434243", "stderr": "", "stderr_lines": [], "stdout": "\nPLAY [all] *********************************************************************\n\nTASK [set nodes data] **********************************************************\nTuesday 02 October 2018 10:49:26 -0400 (0:00:00.085) 0:00:00.085 ******* \nok: [ceph-0]\nok: [compute-0]\nok: [controller-0]\n\nTASK [register machine id] *****************************************************\nTuesday 02 October 2018 10:49:26 -0400 (0:00:00.072) 0:00:00.158 ******* \nchanged: [ceph-0]\nchanged: [controller-0]\nchanged: [compute-0]\n\nTASK [generate host vars from nodes data] **************************************\nTuesday 02 October 2018 10:49:27 -0400 (0:00:00.321) 0:00:00.479 ******* \nok: [controller-0 -> localhost]\nok: [ceph-0 -> localhost]\nok: [compute-0 -> localhost]\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=3 changed=1 unreachable=0 failed=0 \ncompute-0 : ok=3 changed=1 unreachable=0 failed=0 \ncontroller-0 : ok=3 changed=1 unreachable=0 failed=0 \n\nTuesday 02 October 2018 10:49:27 -0400 (0:00:00.590) 0:00:01.070 ******* \n=============================================================================== ", "stdout_lines": ["", "PLAY [all] *********************************************************************", "", "TASK [set nodes data] **********************************************************", "Tuesday 02 October 2018 10:49:26 -0400 (0:00:00.085) 0:00:00.085 ******* ", "ok: [ceph-0]", "ok: [compute-0]", "ok: [controller-0]", "", "TASK [register machine id] *****************************************************", "Tuesday 02 October 
2018 10:49:26 -0400 (0:00:00.072) 0:00:00.158 ******* ", "changed: [ceph-0]", "changed: [controller-0]", "changed: [compute-0]", "", "TASK [generate host vars from nodes data] **************************************", "Tuesday 02 October 2018 10:49:27 -0400 (0:00:00.321) 0:00:00.479 ******* ", "ok: [controller-0 -> localhost]", "ok: [ceph-0 -> localhost]", "ok: [compute-0 -> localhost]", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=3 changed=1 unreachable=0 failed=0 ", "compute-0 : ok=3 changed=1 unreachable=0 failed=0 ", "controller-0 : ok=3 changed=1 unreachable=0 failed=0 ", "", "Tuesday 02 October 2018 10:49:27 -0400 (0:00:00.590) 0:00:01.070 ******* ", "=============================================================================== "]} >2018-10-02 10:49:27,929 p=605 u=mistral | TASK [set ceph-ansible params from Heat] *************************************** >2018-10-02 10:49:27,929 p=605 u=mistral | Tuesday 02 October 2018 10:49:27 -0400 (0:00:02.651) 0:09:58.202 ******* >2018-10-02 10:49:27,969 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2, "ceph_ansible_playbooks_param": ["default"]}, "changed": false} >2018-10-02 10:49:27,984 p=605 u=mistral | TASK [set ceph-ansible playbooks] ********************************************** >2018-10-02 10:49:27,985 p=605 u=mistral | Tuesday 02 October 2018 10:49:27 -0400 (0:00:00.055) 0:09:58.258 ******* >2018-10-02 10:49:28,022 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbooks": ["/usr/share/ceph-ansible/site-docker.yml.sample"]}, "changed": false} >2018-10-02 10:49:28,037 p=605 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-10-02 10:49:28,038 p=605 u=mistral | Tuesday 02 October 2018 10:49:28 -0400 (0:00:00.053) 0:09:58.311 ******* >2018-10-02 10:49:28,079 p=605 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": 
"ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-10-02 10:49:28,095 p=605 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-10-02 10:49:28,095 p=605 u=mistral | Tuesday 02 October 2018 10:49:28 -0400 (0:00:00.057) 0:09:58.368 ******* >2018-10-02 10:49:47,459 p=605 u=mistral | failed: [undercloud] (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:00:19.180664", "end": "2018-10-02 10:49:47.425132", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "msg": "non-zero return code", "rc": 1, "start": "2018-10-02 10:49:28.244468", 
"stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws\n [ERROR]: The python-notario library is missing. Please install it on the node\nyou are running ceph-ansible to continue.\nThe python-notario library is missing. Please install it on the node you are running ceph-ansible to continue.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. 
Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", " [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws", " [ERROR]: The python-notario library is missing. Please install it on the node", "you are running ceph-ansible to continue.", "The python-notario library is missing. Please install it on the node you are running ceph-ansible to continue."], "stdout": "ansible-playbook 2.5.7\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_system.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-validate/tasks/check_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_eth_mon.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_eth_rgw.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/non-container/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/deploy_ssl_keys.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/non-container/configure_iscsi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/container/containerized.yml\n\nPLAYBOOK: site-docker.yml.sample ***********************************************\n14 plays in /usr/share/ceph-ansible/site-docker.yml.sample\n\nPLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***\n\nTASK [gather facts] ************************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:26\nTuesday 02 October 2018 10:49:32 -0400 (0:00:00.144) 0:00:00.144 ******* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [gather and delegate facts] ***********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:31\nTuesday 02 October 2018 10:49:32 -0400 (0:00:00.108) 0:00:00.252 ******* \nok: [controller-0 -> 192.168.24.10] => (item=compute-0)\nok: [controller-0 -> 192.168.24.12] => (item=controller-0)\nok: [controller-0 -> 192.168.24.6] => (item=ceph-0)\n\nTASK [check if it is atomic host] **********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:40\nTuesday 02 October 2018 10:49:45 -0400 (0:00:13.051) 0:00:13.304 ******* \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [set_fact is_atomic] ******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:47\nTuesday 02 October 2018 10:49:46 -0400 (0:00:00.366) 0:00:13.670 ******* \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nMETA: ran handlers", "stdout_lines": ["ansible-playbook 2.5.7", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_system.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_eth_mon.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-validate/tasks/check_eth_rgw.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/non-container/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/deploy_ssl_keys.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/non-container/configure_iscsi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-iscsi-gw/tasks/container/containerized.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "14 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:26", "Tuesday 02 October 2018 10:49:32 -0400 (0:00:00.144) 0:00:00.144 ******* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] ***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:31", "Tuesday 02 October 2018 10:49:32 -0400 (0:00:00.108) 0:00:00.252 ******* ", "ok: [controller-0 -> 192.168.24.10] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.12] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.6] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: 
/usr/share/ceph-ansible/site-docker.yml.sample:40", "Tuesday 02 October 2018 10:49:45 -0400 (0:00:13.051) 0:00:13.304 ******* ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:47", "Tuesday 02 October 2018 10:49:46 -0400 (0:00:00.366) 0:00:13.670 ******* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers"]} >2018-10-02 10:49:47,464 p=605 u=mistral | NO MORE HOSTS LEFT ************************************************************* >2018-10-02 10:49:47,464 p=605 u=mistral | PLAY RECAP ********************************************************************* >2018-10-02 10:49:47,465 p=605 u=mistral | ceph-0 : ok=109 changed=47 unreachable=0 failed=0 >2018-10-02 10:49:47,465 p=605 u=mistral | compute-0 : ok=130 changed=61 unreachable=0 failed=0 >2018-10-02 10:49:47,465 p=605 u=mistral | controller-0 : ok=174 changed=84 unreachable=0 failed=0 >2018-10-02 10:49:47,465 p=605 u=mistral | undercloud : ok=31 changed=10 unreachable=0 failed=1 >2018-10-02 10:49:47,466 p=605 u=mistral | Tuesday 02 October 2018 10:49:47 -0400 (0:00:19.371) 0:10:17.739 ******* >2018-10-02 10:49:47,466 p=605 u=mistral | ===============================================================================